2026-03-09 00:00:07.489785 | Job console starting
2026-03-09 00:00:07.503985 | Updating git repos
2026-03-09 00:00:07.678472 | Cloning repos into workspace
2026-03-09 00:00:07.949448 | Restoring repo states
2026-03-09 00:00:07.992730 | Merging changes
2026-03-09 00:00:07.992753 | Checking out repos
2026-03-09 00:00:08.369946 | Preparing playbooks
2026-03-09 00:00:09.380750 | Running Ansible setup
2026-03-09 00:00:16.692343 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-09 00:00:19.296075 |
2026-03-09 00:00:19.296217 | PLAY [Base pre]
2026-03-09 00:00:19.319888 |
2026-03-09 00:00:19.320019 | TASK [Setup log path fact]
2026-03-09 00:00:19.366292 | orchestrator | ok
2026-03-09 00:00:19.387393 |
2026-03-09 00:00:19.387538 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 00:00:19.418220 | orchestrator | ok
2026-03-09 00:00:19.467463 |
2026-03-09 00:00:19.467595 | TASK [emit-job-header : Print job information]
2026-03-09 00:00:19.590190 | # Job Information
2026-03-09 00:00:19.590497 | Ansible Version: 2.16.14
2026-03-09 00:00:19.590538 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-09 00:00:19.590580 | Pipeline: periodic-midnight
2026-03-09 00:00:19.590792 | Executor: 521e9411259a
2026-03-09 00:00:19.590824 | Triggered by: https://github.com/osism/testbed
2026-03-09 00:00:19.590937 | Event ID: ba3e5e257f914ab0a0c5d45d3402b562
2026-03-09 00:00:19.618215 |
2026-03-09 00:00:19.618387 | LOOP [emit-job-header : Print node information]
2026-03-09 00:00:19.929805 | orchestrator | ok:
2026-03-09 00:00:19.930035 | orchestrator | # Node Information
2026-03-09 00:00:19.930075 | orchestrator | Inventory Hostname: orchestrator
2026-03-09 00:00:19.930103 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-09 00:00:19.930128 | orchestrator | Username: zuul-testbed01
2026-03-09 00:00:19.930152 | orchestrator | Distro: Debian 12.13
2026-03-09 00:00:19.930178 | orchestrator | Provider: static-testbed
2026-03-09 00:00:19.930201 | orchestrator | Region:
2026-03-09 00:00:19.930224 | orchestrator | Label: testbed-orchestrator
2026-03-09 00:00:19.930245 | orchestrator | Product Name: OpenStack Nova
2026-03-09 00:00:19.930266 | orchestrator | Interface IP: 81.163.193.140
2026-03-09 00:00:19.953971 |
2026-03-09 00:00:19.954077 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:21.567521 | orchestrator -> localhost | changed
2026-03-09 00:00:21.578804 |
2026-03-09 00:00:21.578970 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-09 00:00:23.977183 | orchestrator -> localhost | changed
2026-03-09 00:00:23.989201 |
2026-03-09 00:00:23.989303 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-09 00:00:24.553261 | orchestrator -> localhost | ok
2026-03-09 00:00:24.558818 |
2026-03-09 00:00:24.558929 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-09 00:00:24.575837 | orchestrator | ok
2026-03-09 00:00:24.590895 | orchestrator | included: /var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-09 00:00:24.615629 |
2026-03-09 00:00:24.615749 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-09 00:00:31.417166 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-09 00:00:31.418224 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/9a456cfc94b04f73a04fd6c3a5a67d43_id_rsa
2026-03-09 00:00:31.418293 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/9a456cfc94b04f73a04fd6c3a5a67d43_id_rsa.pub
2026-03-09 00:00:31.418317 | orchestrator -> localhost | The key fingerprint is:
2026-03-09 00:00:31.418343 | orchestrator -> localhost | SHA256:TysaFJWWhsquFKV8i6sQEj9HFs4eB+fEwtZxyAU7UKg zuul-build-sshkey
2026-03-09 00:00:31.418362 | orchestrator -> localhost | The key's randomart image is:
2026-03-09 00:00:31.418388 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-09 00:00:31.418407 | orchestrator -> localhost | | .+***+o |
2026-03-09 00:00:31.418426 | orchestrator -> localhost | | o*B*+= |
2026-03-09 00:00:31.418442 | orchestrator -> localhost | |.. **o*o |
2026-03-09 00:00:31.418458 | orchestrator -> localhost | | oE++o o |
2026-03-09 00:00:31.418475 | orchestrator -> localhost | |o o=o.. S . |
2026-03-09 00:00:31.418496 | orchestrator -> localhost | |..ooo. o . |
2026-03-09 00:00:31.418513 | orchestrator -> localhost | |.. o . . o |
2026-03-09 00:00:31.418529 | orchestrator -> localhost | |. o o . |
2026-03-09 00:00:31.418547 | orchestrator -> localhost | |.. . |
2026-03-09 00:00:31.418564 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-09 00:00:31.418616 | orchestrator -> localhost | ok: Runtime: 0:00:05.601677
2026-03-09 00:00:31.424541 |
2026-03-09 00:00:31.424622 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-09 00:00:31.463853 | orchestrator | ok
2026-03-09 00:00:31.478001 | orchestrator | included: /var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-09 00:00:31.503918 |
2026-03-09 00:00:31.504017 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-09 00:00:31.545108 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:31.551448 |
2026-03-09 00:00:31.551544 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-09 00:00:32.401573 | orchestrator | changed
2026-03-09 00:00:32.406732 |
2026-03-09 00:00:32.406814 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-09 00:00:32.758948 | orchestrator | ok
2026-03-09 00:00:32.766284 |
2026-03-09 00:00:32.766379 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-09 00:00:33.244338 | orchestrator | ok
2026-03-09 00:00:33.253061 |
2026-03-09 00:00:33.253145 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-09 00:00:33.757231 | orchestrator | ok
2026-03-09 00:00:33.763463 |
2026-03-09 00:00:33.763542 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-09 00:00:33.780138 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:33.810979 |
2026-03-09 00:00:33.811094 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-09 00:00:34.955066 | orchestrator -> localhost | changed
2026-03-09 00:00:34.966269 |
2026-03-09 00:00:34.966361 | TASK [add-build-sshkey : Add back temp key]
2026-03-09 00:00:35.938857 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/9a456cfc94b04f73a04fd6c3a5a67d43_id_rsa (zuul-build-sshkey)
2026-03-09 00:00:35.939033 | orchestrator -> localhost | ok: Runtime: 0:00:00.046145
2026-03-09 00:00:35.944840 |
2026-03-09 00:00:35.944922 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-09 00:00:36.626162 | orchestrator | ok
2026-03-09 00:00:36.631147 |
2026-03-09 00:00:36.631231 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-09 00:00:36.673774 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:36.819297 |
2026-03-09 00:00:36.819398 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-09 00:00:37.330556 | orchestrator | ok
2026-03-09 00:00:37.359529 |
2026-03-09 00:00:37.359659 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-09 00:00:37.435911 | orchestrator | ok
2026-03-09 00:00:37.464385 |
2026-03-09 00:00:37.464504 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:38.513636 | orchestrator -> localhost | ok
2026-03-09 00:00:38.521676 |
2026-03-09 00:00:38.521806 | TASK [validate-host : Collect information about the host]
2026-03-09 00:00:40.451565 | orchestrator | ok
2026-03-09 00:00:40.479053 |
2026-03-09 00:00:40.479165 | TASK [validate-host : Sanitize hostname]
2026-03-09 00:00:40.638898 | orchestrator | ok
2026-03-09 00:00:40.644351 |
2026-03-09 00:00:40.648847 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-09 00:00:41.837963 | orchestrator -> localhost | changed
2026-03-09 00:00:41.843111 |
2026-03-09 00:00:41.843203 | TASK [validate-host : Collect information about zuul worker]
2026-03-09 00:00:42.576993 | orchestrator | ok
2026-03-09 00:00:42.582194 |
2026-03-09 00:00:42.582273 | TASK [validate-host : Write out all zuul information for each host]
2026-03-09 00:00:44.112094 | orchestrator -> localhost | changed
2026-03-09 00:00:44.124056 |
2026-03-09 00:00:44.124152 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-09 00:00:44.623562 | orchestrator | ok
2026-03-09 00:00:44.628381 |
2026-03-09 00:00:44.628466 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-09 00:02:05.196004 | orchestrator | changed:
2026-03-09 00:02:05.196229 | orchestrator | .d..t...... src/
2026-03-09 00:02:05.196263 | orchestrator | .d..t...... src/github.com/
2026-03-09 00:02:05.196289 | orchestrator | .d..t...... src/github.com/osism/
2026-03-09 00:02:05.196311 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-09 00:02:05.196331 | orchestrator | RedHat.yml
2026-03-09 00:02:05.224521 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-09 00:02:05.224539 | orchestrator | RedHat.yml
2026-03-09 00:02:05.224591 | orchestrator | = 1.53.0"...
2026-03-09 00:02:16.526453 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-09 00:02:16.543274 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-09 00:02:16.684981 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-09 00:02:17.517020 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-09 00:02:17.580580 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-09 00:02:18.047341 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:18.109067 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-09 00:02:18.614690 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:18.614778 | orchestrator |
2026-03-09 00:02:18.614787 | orchestrator | Providers are signed by their developers.
2026-03-09 00:02:18.614793 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-09 00:02:18.614797 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-09 00:02:18.614810 | orchestrator |
2026-03-09 00:02:18.614815 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-09 00:02:18.614819 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-09 00:02:18.614830 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-09 00:02:18.614834 | orchestrator | you run "tofu init" in the future.
2026-03-09 00:02:18.615199 | orchestrator |
2026-03-09 00:02:18.615208 | orchestrator | OpenTofu has been successfully initialized!
2026-03-09 00:02:18.615215 | orchestrator |
2026-03-09 00:02:18.615219 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-09 00:02:18.615223 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-09 00:02:18.615231 | orchestrator | should now work.
2026-03-09 00:02:18.615235 | orchestrator |
2026-03-09 00:02:18.615239 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-09 00:02:18.615243 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-09 00:02:18.615251 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-09 00:02:18.792169 | orchestrator | Created and switched to workspace "ci"!
2026-03-09 00:02:18.792284 | orchestrator |
2026-03-09 00:02:18.792292 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-09 00:02:18.792298 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-09 00:02:18.792303 | orchestrator | for this configuration.
2026-03-09 00:02:18.942128 | orchestrator | ci.auto.tfvars
2026-03-09 00:02:19.196978 | orchestrator | default_custom.tf
2026-03-09 00:02:24.171078 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-09 00:02:25.291605 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-09 00:02:25.494613 | orchestrator |
2026-03-09 00:02:25.494683 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-09 00:02:25.494692 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-09 00:02:25.494697 | orchestrator | + create
2026-03-09 00:02:25.494703 | orchestrator | <= read (data resources)
2026-03-09 00:02:25.494708 | orchestrator |
2026-03-09 00:02:25.494712 | orchestrator | OpenTofu will perform the following actions:
2026-03-09 00:02:25.494724 | orchestrator |
2026-03-09 00:02:25.494729 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-09 00:02:25.494733 | orchestrator | # (config refers to values not yet known)
2026-03-09 00:02:25.494737 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-09 00:02:25.494742 | orchestrator | + checksum = (known after apply)
2026-03-09 00:02:25.494760 | orchestrator | + created_at = (known after apply)
2026-03-09 00:02:25.494765 | orchestrator | + file = (known after apply)
2026-03-09 00:02:25.494769 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.494791 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.494795 | orchestrator | + min_disk_gb = (known after apply)
2026-03-09 00:02:25.494799 | orchestrator | + min_ram_mb = (known after apply)
2026-03-09 00:02:25.494803 | orchestrator | + most_recent = true
2026-03-09 00:02:25.494807 | orchestrator | + name = (known after apply)
2026-03-09 00:02:25.494812 | orchestrator | + protected = (known after apply)
2026-03-09 00:02:25.494815 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.494822 | orchestrator | + schema = (known after apply)
2026-03-09 00:02:25.494826 | orchestrator | + size_bytes = (known after apply)
2026-03-09 00:02:25.494830 | orchestrator | + tags = (known after apply)
2026-03-09 00:02:25.494834 | orchestrator | + updated_at = (known after apply)
2026-03-09 00:02:25.494838 | orchestrator | }
2026-03-09 00:02:25.494844 | orchestrator |
2026-03-09 00:02:25.494848 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-09 00:02:25.494852 | orchestrator | # (config refers to values not yet known)
2026-03-09 00:02:25.494856 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-09 00:02:25.494860 | orchestrator | + checksum = (known after apply)
2026-03-09 00:02:25.494864 | orchestrator | + created_at = (known after apply)
2026-03-09 00:02:25.494868 | orchestrator | + file = (known after apply)
2026-03-09 00:02:25.494872 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.494876 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.494880 | orchestrator | + min_disk_gb = (known after apply)
2026-03-09 00:02:25.494883 | orchestrator | + min_ram_mb = (known after apply)
2026-03-09 00:02:25.494887 | orchestrator | + most_recent = true
2026-03-09 00:02:25.494891 | orchestrator | + name = (known after apply)
2026-03-09 00:02:25.494895 | orchestrator | + protected = (known after apply)
2026-03-09 00:02:25.494899 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.494902 | orchestrator | + schema = (known after apply)
2026-03-09 00:02:25.494906 | orchestrator | + size_bytes = (known after apply)
2026-03-09 00:02:25.494910 | orchestrator | + tags = (known after apply)
2026-03-09 00:02:25.494914 | orchestrator | + updated_at = (known after apply)
2026-03-09 00:02:25.494918 | orchestrator | }
2026-03-09 00:02:25.494923 | orchestrator |
2026-03-09 00:02:25.494927 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-09 00:02:25.494931 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-09 00:02:25.494935 | orchestrator | + content = (known after apply)
2026-03-09 00:02:25.494939 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.494943 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.494947 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:25.494950 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:25.494954 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:25.494958 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:25.494962 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:25.494966 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:25.494969 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-09 00:02:25.494973 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.494977 | orchestrator | }
2026-03-09 00:02:25.494982 | orchestrator |
2026-03-09 00:02:25.494986 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-09 00:02:25.494990 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-09 00:02:25.494994 | orchestrator | + content = (known after apply)
2026-03-09 00:02:25.494998 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.495002 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.495005 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:25.495009 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:25.495013 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:25.495017 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:25.495020 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:25.495024 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:25.495032 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-09 00:02:25.495036 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495040 | orchestrator | }
2026-03-09 00:02:25.495045 | orchestrator |
2026-03-09 00:02:25.495054 | orchestrator | # local_file.inventory will be created
2026-03-09 00:02:25.495058 | orchestrator | + resource "local_file" "inventory" {
2026-03-09 00:02:25.495062 | orchestrator | + content = (known after apply)
2026-03-09 00:02:25.495066 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.495070 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.495074 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:25.495078 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:25.495082 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:25.495085 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:25.495089 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:25.495093 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:25.495097 | orchestrator | + filename = "inventory.ci"
2026-03-09 00:02:25.495101 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495105 | orchestrator | }
2026-03-09 00:02:25.495110 | orchestrator |
2026-03-09 00:02:25.495114 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-09 00:02:25.495118 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-09 00:02:25.495122 | orchestrator | + content = (sensitive value)
2026-03-09 00:02:25.495125 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:25.495129 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:25.495133 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:25.495137 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:25.495141 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:25.495145 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:25.495149 | orchestrator | + directory_permission = "0700"
2026-03-09 00:02:25.495152 | orchestrator | + file_permission = "0600"
2026-03-09 00:02:25.495156 | orchestrator | + filename = ".id_rsa.ci"
2026-03-09 00:02:25.495160 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495164 | orchestrator | }
2026-03-09 00:02:25.495168 | orchestrator |
2026-03-09 00:02:25.495172 | orchestrator | # null_resource.node_semaphore will be created
2026-03-09 00:02:25.495176 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-09 00:02:25.495179 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495183 | orchestrator | }
2026-03-09 00:02:25.495189 | orchestrator |
2026-03-09 00:02:25.495193 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-09 00:02:25.495197 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-09 00:02:25.495200 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495204 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495208 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495212 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495216 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495220 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-09 00:02:25.495224 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495228 | orchestrator | + size = 80
2026-03-09 00:02:25.495232 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495235 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495239 | orchestrator | }
2026-03-09 00:02:25.495301 | orchestrator |
2026-03-09 00:02:25.495308 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-09 00:02:25.495312 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495315 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495319 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495323 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495331 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495335 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495339 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-09 00:02:25.495343 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495347 | orchestrator | + size = 80
2026-03-09 00:02:25.495351 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495354 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495358 | orchestrator | }
2026-03-09 00:02:25.495427 | orchestrator |
2026-03-09 00:02:25.495433 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-09 00:02:25.495436 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495440 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495444 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495448 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495452 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495456 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495459 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-09 00:02:25.495463 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495467 | orchestrator | + size = 80
2026-03-09 00:02:25.495471 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495475 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495479 | orchestrator | }
2026-03-09 00:02:25.495522 | orchestrator |
2026-03-09 00:02:25.495527 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-09 00:02:25.495531 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495534 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495538 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495542 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495546 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495550 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495554 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-09 00:02:25.495557 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495561 | orchestrator | + size = 80
2026-03-09 00:02:25.495565 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495569 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495573 | orchestrator | }
2026-03-09 00:02:25.495592 | orchestrator |
2026-03-09 00:02:25.495597 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-09 00:02:25.495601 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495604 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495608 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495612 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495616 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495620 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495627 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-09 00:02:25.495631 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495635 | orchestrator | + size = 80
2026-03-09 00:02:25.495639 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495642 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495646 | orchestrator | }
2026-03-09 00:02:25.495652 | orchestrator |
2026-03-09 00:02:25.495656 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-09 00:02:25.495660 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495664 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495668 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495671 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495679 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495683 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495687 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-09 00:02:25.495691 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495695 | orchestrator | + size = 80
2026-03-09 00:02:25.495699 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495703 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495706 | orchestrator | }
2026-03-09 00:02:25.495731 | orchestrator |
2026-03-09 00:02:25.495736 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-09 00:02:25.495740 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:25.495756 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495760 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495764 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495768 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:25.495772 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495776 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-09 00:02:25.495780 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495783 | orchestrator | + size = 80
2026-03-09 00:02:25.495787 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495791 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495795 | orchestrator | }
2026-03-09 00:02:25.495826 | orchestrator |
2026-03-09 00:02:25.495831 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-09 00:02:25.495836 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.495839 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495843 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495847 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495851 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495855 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-09 00:02:25.495859 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495863 | orchestrator | + size = 20
2026-03-09 00:02:25.495867 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495871 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495875 | orchestrator | }
2026-03-09 00:02:25.495894 | orchestrator |
2026-03-09 00:02:25.495899 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-09 00:02:25.495903 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.495906 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495910 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495914 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495918 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495922 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-09 00:02:25.495926 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495929 | orchestrator | + size = 20
2026-03-09 00:02:25.495933 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.495937 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.495941 | orchestrator | }
2026-03-09 00:02:25.495960 | orchestrator |
2026-03-09 00:02:25.495964 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-09 00:02:25.495968 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.495972 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.495976 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.495980 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.495984 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.495987 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-09 00:02:25.495991 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.495999 | orchestrator | + size = 20
2026-03-09 00:02:25.496003 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496007 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496010 | orchestrator | }
2026-03-09 00:02:25.496016 | orchestrator |
2026-03-09 00:02:25.496020 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-09 00:02:25.496024 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.496027 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.496031 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.496035 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.496039 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.496043 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-09 00:02:25.496047 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.496051 | orchestrator | + size = 20
2026-03-09 00:02:25.496055 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496058 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496062 | orchestrator | }
2026-03-09 00:02:25.496087 | orchestrator |
2026-03-09 00:02:25.496092 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-09 00:02:25.496096 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.496100 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.496104 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.496108 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.496111 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.496115 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-09 00:02:25.496119 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.496125 | orchestrator | + size = 20
2026-03-09 00:02:25.496130 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496133 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496137 | orchestrator | }
2026-03-09 00:02:25.496143 | orchestrator |
2026-03-09 00:02:25.496147 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-09 00:02:25.496151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.496155 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.496159 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.496162 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.496166 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.496170 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-09 00:02:25.496174 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.496178 | orchestrator | + size = 20
2026-03-09 00:02:25.496182 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496186 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496190 | orchestrator | }
2026-03-09 00:02:25.496215 | orchestrator |
2026-03-09 00:02:25.496219 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-09 00:02:25.496223 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.496227 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.496231 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.496235 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.496239 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.496242 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-09 00:02:25.496246 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.496250 | orchestrator | + size = 20
2026-03-09 00:02:25.496254 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496258 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496262 | orchestrator | }
2026-03-09 00:02:25.496281 | orchestrator |
2026-03-09 00:02:25.496286 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-09 00:02:25.496290 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:25.496302 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:25.496306 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:25.496310 | orchestrator | + id = (known after apply)
2026-03-09 00:02:25.496314 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:25.496317 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-09 00:02:25.496321 | orchestrator | + region = (known after apply)
2026-03-09 00:02:25.496325 | orchestrator | + size = 20
2026-03-09 00:02:25.496329 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:25.496333 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:25.496337 | orchestrator | }
2026-03-09 00:02:25.496343 | orchestrator |
2026-03-09 00:02:25.496347 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-09 00:02:25.496350 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:25.496354 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:25.496358 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.496362 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.496366 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:25.496370 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-09 00:02:25.496374 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.496377 | orchestrator | + size = 20 2026-03-09 00:02:25.496381 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:25.496385 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:25.496389 | orchestrator | } 2026-03-09 00:02:25.496658 | orchestrator | 2026-03-09 00:02:25.496664 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-09 00:02:25.496668 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-09 00:02:25.496671 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.496675 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.496679 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.496683 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.496687 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.496690 | orchestrator | + config_drive = true 2026-03-09 00:02:25.496694 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.496698 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.496702 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-09 00:02:25.496706 | orchestrator | + force_delete = false 2026-03-09 00:02:25.496710 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.496713 | 
orchestrator | + id = (known after apply) 2026-03-09 00:02:25.496717 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.496721 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.496725 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.496729 | orchestrator | + name = "testbed-manager" 2026-03-09 00:02:25.496732 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.496736 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.496740 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.496754 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.496758 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.496762 | orchestrator | + user_data = (sensitive value) 2026-03-09 00:02:25.496766 | orchestrator | 2026-03-09 00:02:25.496770 | orchestrator | + block_device { 2026-03-09 00:02:25.496774 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.496777 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.496784 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.496788 | orchestrator | + multiattach = false 2026-03-09 00:02:25.496792 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.496796 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.496803 | orchestrator | } 2026-03-09 00:02:25.496807 | orchestrator | 2026-03-09 00:02:25.496811 | orchestrator | + network { 2026-03-09 00:02:25.496815 | orchestrator | + access_network = false 2026-03-09 00:02:25.496818 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.496822 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.496826 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.496830 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.496833 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.496837 | orchestrator | + uuid = (known after apply) 2026-03-09 
00:02:25.496841 | orchestrator | } 2026-03-09 00:02:25.496845 | orchestrator | } 2026-03-09 00:02:25.496917 | orchestrator | 2026-03-09 00:02:25.496923 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-09 00:02:25.496927 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.496931 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.496935 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.496939 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.496943 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.496946 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.496950 | orchestrator | + config_drive = true 2026-03-09 00:02:25.496954 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.496958 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.496961 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.496965 | orchestrator | + force_delete = false 2026-03-09 00:02:25.496969 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.496973 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.496977 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.496981 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.496984 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.496988 | orchestrator | + name = "testbed-node-0" 2026-03-09 00:02:25.496992 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.496996 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.497000 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.497003 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.497007 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.497011 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.497015 | orchestrator | 2026-03-09 00:02:25.497019 | orchestrator | + block_device { 2026-03-09 00:02:25.497023 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.497026 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.497030 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.497034 | orchestrator | + multiattach = false 2026-03-09 00:02:25.497038 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.497042 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497045 | orchestrator | } 2026-03-09 00:02:25.497049 | orchestrator | 2026-03-09 00:02:25.497053 | orchestrator | + network { 2026-03-09 00:02:25.497057 | orchestrator | + access_network = false 2026-03-09 00:02:25.497061 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.497065 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.497068 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.497072 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.497076 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.497080 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497084 | orchestrator | } 2026-03-09 00:02:25.497087 | orchestrator | } 2026-03-09 00:02:25.497204 | orchestrator | 2026-03-09 00:02:25.497210 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-09 00:02:25.497213 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.497217 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.497225 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.497228 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.497232 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.497236 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.497240 
| orchestrator | + config_drive = true 2026-03-09 00:02:25.497244 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.497247 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.497251 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.497255 | orchestrator | + force_delete = false 2026-03-09 00:02:25.497259 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.497262 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.497266 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.497270 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.497274 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.497278 | orchestrator | + name = "testbed-node-1" 2026-03-09 00:02:25.497281 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.497285 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.497289 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.497293 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.497297 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.497300 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.497304 | orchestrator | 2026-03-09 00:02:25.497308 | orchestrator | + block_device { 2026-03-09 00:02:25.497312 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.497316 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.497319 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.497323 | orchestrator | + multiattach = false 2026-03-09 00:02:25.497327 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.497331 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497335 | orchestrator | } 2026-03-09 00:02:25.497338 | orchestrator | 2026-03-09 00:02:25.497342 | orchestrator | + network { 2026-03-09 00:02:25.497346 | orchestrator | + access_network = 
false 2026-03-09 00:02:25.497350 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.497353 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.497357 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.497361 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.497365 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.497369 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497372 | orchestrator | } 2026-03-09 00:02:25.497376 | orchestrator | } 2026-03-09 00:02:25.497436 | orchestrator | 2026-03-09 00:02:25.497442 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-09 00:02:25.497446 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.497450 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.497454 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.497458 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.497462 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.497468 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.497472 | orchestrator | + config_drive = true 2026-03-09 00:02:25.497476 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.497480 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.497484 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.497488 | orchestrator | + force_delete = false 2026-03-09 00:02:25.497491 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.497495 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.497499 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.497506 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.497510 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.497514 | orchestrator | + name = 
"testbed-node-2" 2026-03-09 00:02:25.497518 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.497521 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.497525 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.497529 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.497533 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.497537 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.497541 | orchestrator | 2026-03-09 00:02:25.497544 | orchestrator | + block_device { 2026-03-09 00:02:25.497548 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.497552 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.497556 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.497559 | orchestrator | + multiattach = false 2026-03-09 00:02:25.497563 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.497567 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497571 | orchestrator | } 2026-03-09 00:02:25.497575 | orchestrator | 2026-03-09 00:02:25.497579 | orchestrator | + network { 2026-03-09 00:02:25.497582 | orchestrator | + access_network = false 2026-03-09 00:02:25.497586 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.497590 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.497594 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.497598 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.497601 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.497605 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497609 | orchestrator | } 2026-03-09 00:02:25.497613 | orchestrator | } 2026-03-09 00:02:25.497833 | orchestrator | 2026-03-09 00:02:25.497839 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-09 00:02:25.497843 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.497847 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.497851 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.497855 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.497858 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.497862 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.497866 | orchestrator | + config_drive = true 2026-03-09 00:02:25.497870 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.497874 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.497877 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.497881 | orchestrator | + force_delete = false 2026-03-09 00:02:25.497885 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.497889 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.497893 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.497896 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.497900 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.497904 | orchestrator | + name = "testbed-node-3" 2026-03-09 00:02:25.497908 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.497911 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.497915 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.497919 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.497923 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.497927 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.497930 | orchestrator | 2026-03-09 00:02:25.497934 | orchestrator | + block_device { 2026-03-09 00:02:25.497941 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.497945 | orchestrator | + delete_on_termination = false 2026-03-09 
00:02:25.497949 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.497956 | orchestrator | + multiattach = false 2026-03-09 00:02:25.497960 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.497964 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.497968 | orchestrator | } 2026-03-09 00:02:25.497971 | orchestrator | 2026-03-09 00:02:25.497975 | orchestrator | + network { 2026-03-09 00:02:25.497979 | orchestrator | + access_network = false 2026-03-09 00:02:25.497983 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.497987 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.497990 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.497994 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.497998 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.498002 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.498005 | orchestrator | } 2026-03-09 00:02:25.498009 | orchestrator | } 2026-03-09 00:02:25.498080 | orchestrator | 2026-03-09 00:02:25.498085 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-09 00:02:25.498089 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.498093 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.498097 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.498101 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.498104 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.498108 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.498112 | orchestrator | + config_drive = true 2026-03-09 00:02:25.498116 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.498120 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.498124 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.498128 | 
orchestrator | + force_delete = false 2026-03-09 00:02:25.498131 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.498135 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.498139 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.498143 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.498147 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.498151 | orchestrator | + name = "testbed-node-4" 2026-03-09 00:02:25.498154 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.498158 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.498162 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.498166 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:25.498170 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.498174 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.498177 | orchestrator | 2026-03-09 00:02:25.498181 | orchestrator | + block_device { 2026-03-09 00:02:25.498185 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.498189 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.498193 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.498197 | orchestrator | + multiattach = false 2026-03-09 00:02:25.498200 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.498204 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.498208 | orchestrator | } 2026-03-09 00:02:25.498212 | orchestrator | 2026-03-09 00:02:25.498216 | orchestrator | + network { 2026-03-09 00:02:25.498220 | orchestrator | + access_network = false 2026-03-09 00:02:25.498224 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.498227 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.498231 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.498235 | orchestrator | + name = (known 
after apply) 2026-03-09 00:02:25.498239 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.498243 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.498247 | orchestrator | } 2026-03-09 00:02:25.498250 | orchestrator | } 2026-03-09 00:02:25.498305 | orchestrator | 2026-03-09 00:02:25.498310 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-09 00:02:25.498314 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:25.498318 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:25.498322 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:25.498326 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:25.498329 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:25.498333 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:25.498337 | orchestrator | + config_drive = true 2026-03-09 00:02:25.498341 | orchestrator | + created = (known after apply) 2026-03-09 00:02:25.498345 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:25.498349 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:25.498352 | orchestrator | + force_delete = false 2026-03-09 00:02:25.498359 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:25.498363 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.498367 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:25.498371 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:25.498374 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:25.498378 | orchestrator | + name = "testbed-node-5" 2026-03-09 00:02:25.498382 | orchestrator | + power_state = "active" 2026-03-09 00:02:25.498386 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.498390 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:25.498393 | orchestrator | + 
stop_before_destroy = false 2026-03-09 00:02:25.498397 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:25.498401 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:25.498405 | orchestrator | 2026-03-09 00:02:25.498409 | orchestrator | + block_device { 2026-03-09 00:02:25.498413 | orchestrator | + boot_index = 0 2026-03-09 00:02:25.498416 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:25.498420 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:25.498424 | orchestrator | + multiattach = false 2026-03-09 00:02:25.498428 | orchestrator | + source_type = "volume" 2026-03-09 00:02:25.498432 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.498435 | orchestrator | } 2026-03-09 00:02:25.498439 | orchestrator | 2026-03-09 00:02:25.498443 | orchestrator | + network { 2026-03-09 00:02:25.498447 | orchestrator | + access_network = false 2026-03-09 00:02:25.498451 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:25.498455 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:25.498458 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:25.498462 | orchestrator | + name = (known after apply) 2026-03-09 00:02:25.498466 | orchestrator | + port = (known after apply) 2026-03-09 00:02:25.498470 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:25.498474 | orchestrator | } 2026-03-09 00:02:25.498478 | orchestrator | } 2026-03-09 00:02:25.498483 | orchestrator | 2026-03-09 00:02:25.498487 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-09 00:02:25.498491 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-09 00:02:25.498495 | orchestrator | + fingerprint = (known after apply) 2026-03-09 00:02:25.498499 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.498503 | orchestrator | + name = "testbed" 2026-03-09 00:02:25.498507 | orchestrator | + private_key = 
(sensitive value) 2026-03-09 00:02:25.498510 | orchestrator | + public_key = (known after apply) 2026-03-09 00:02:25.498514 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.498518 | orchestrator | + user_id = (known after apply) 2026-03-09 00:02:25.498522 | orchestrator | } 2026-03-09 00:02:25.498526 | orchestrator | 2026-03-09 00:02:25.498530 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-09 00:02:25.498534 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:25.498541 | orchestrator | + device = (known after apply) 2026-03-09 00:02:25.498545 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.498548 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:25.498552 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.498556 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:25.498560 | orchestrator | } 2026-03-09 00:02:25.498564 | orchestrator | 2026-03-09 00:02:25.498568 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-09 00:02:25.498571 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:25.498575 | orchestrator | + device = (known after apply) 2026-03-09 00:02:25.498579 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.498583 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:25.498587 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.498590 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:25.498594 | orchestrator | } 2026-03-09 00:02:25.498598 | orchestrator | 2026-03-09 00:02:25.498602 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-09 00:02:25.498606 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-09 00:02:25.502156 | orchestrator | + network_id = (known after apply) 2026-03-09 00:02:25.502159 | orchestrator | + no_gateway = false 2026-03-09 00:02:25.502163 | orchestrator | + region = (known after apply) 2026-03-09 00:02:25.502167 | orchestrator | + service_types = (known after apply) 2026-03-09 00:02:25.502174 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:25.502178 | orchestrator | 2026-03-09 00:02:25.502182 | orchestrator | + allocation_pool { 2026-03-09 00:02:25.502186 | orchestrator | + end = "192.168.31.250" 2026-03-09 00:02:25.502190 | orchestrator | + start = "192.168.31.200" 2026-03-09 00:02:25.502194 | orchestrator | } 2026-03-09 00:02:25.502198 | orchestrator | } 2026-03-09 00:02:25.502203 | orchestrator | 2026-03-09 00:02:25.502207 | orchestrator | # terraform_data.image will be created 2026-03-09 00:02:25.502211 | orchestrator | + resource "terraform_data" "image" { 2026-03-09 00:02:25.502215 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.502218 | orchestrator | + input = "Ubuntu 24.04" 2026-03-09 00:02:25.502222 | orchestrator | + output = (known after apply) 2026-03-09 00:02:25.502226 | orchestrator | } 2026-03-09 00:02:25.502230 | orchestrator | 2026-03-09 00:02:25.502234 | orchestrator | # terraform_data.image_node will be created 2026-03-09 00:02:25.502237 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-09 00:02:25.502241 | orchestrator | + id = (known after apply) 2026-03-09 00:02:25.502245 | orchestrator | + input = "Ubuntu 24.04" 2026-03-09 00:02:25.502249 | orchestrator | + output = (known after apply) 2026-03-09 00:02:25.502253 | orchestrator | } 2026-03-09 00:02:25.502257 | orchestrator | 2026-03-09 00:02:25.502260 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
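For reference, the plan entries above correspond to provider HCL roughly like the following. This is a sketch reconstructed from the plan output, not the actual osism/testbed source; the `security_group_id` and `network_id` references are assumptions, since the plan only shows `(known after apply)` for them.

```hcl
# Reconstructed sketch from the plan output above -- not the actual
# osism/testbed source. Allows VRRP (IP protocol 112) on the node
# security group; the secgroup reference is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# Management subnet: a /20 CIDR with a small DHCP allocation pool at the
# top of the range, matching the cidr/allocation_pool values in the plan.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Keeping the allocation pool to a 51-address slice of the /20 leaves the rest of the subnet free for statically addressed ports such as the node and manager management ports below.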
2026-03-09 00:02:25.502264 | orchestrator |
2026-03-09 00:02:25.502268 | orchestrator | Changes to Outputs:
2026-03-09 00:02:25.502272 | orchestrator | + manager_address = (sensitive value)
2026-03-09 00:02:25.502276 | orchestrator | + private_key = (sensitive value)
2026-03-09 00:02:25.615798 | orchestrator | terraform_data.image_node: Creating...
2026-03-09 00:02:25.616172 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=cb7e05bb-8fcc-ed54-9596-f622af22df3c]
2026-03-09 00:02:25.769494 | orchestrator | terraform_data.image: Creating...
2026-03-09 00:02:25.769579 | orchestrator | terraform_data.image: Creation complete after 0s [id=579b4725-c0fc-74f0-861e-4d00c72ca689]
2026-03-09 00:02:25.786404 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-09 00:02:25.787117 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-09 00:02:25.791416 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-09 00:02:25.795346 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-09 00:02:25.796602 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-09 00:02:25.797976 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-09 00:02:25.800538 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-09 00:02:25.804236 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-09 00:02:25.804386 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-09 00:02:25.808105 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-09 00:02:26.284875 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-09 00:02:26.291724 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-09 00:02:26.337707 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-09 00:02:26.345106 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-09 00:02:26.994056 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=f6edaac0-1ce0-4e4f-86db-2915c70f411b]
2026-03-09 00:02:26.998644 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-09 00:02:27.126208 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-09 00:02:27.137686 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-09 00:02:29.515681 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=320449d2-61ff-46fc-8f0d-ef8de6be542f]
2026-03-09 00:02:29.532983 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-09 00:02:29.537366 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=0231dd4c20fd7c1eda02aaf626e924581b5f3166]
2026-03-09 00:02:29.538391 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=771f98cb-74e3-479e-8ec9-00fdc11a8238]
2026-03-09 00:02:29.540331 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d616dde6-c913-49b8-b8ef-90f7cc767ff0]
2026-03-09 00:02:29.547458 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-09 00:02:29.547492 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-09 00:02:29.548355 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-09 00:02:29.552674 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=fb37f328-fd68-494b-bcff-294494d86f6d]
2026-03-09 00:02:29.554779 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=093443b9bcafd26d53b38f1227ca8be776292b3f]
2026-03-09 00:02:29.558806 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-09 00:02:29.558950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=7ad7d39e-c79f-49cf-9f83-32481f17a0bc]
2026-03-09 00:02:29.562139 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-09 00:02:29.566003 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-09 00:02:29.587857 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=bf4da7fe-59ae-42e8-92ff-fb55dbc42396]
2026-03-09 00:02:29.590615 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=17d99fae-d184-430d-aac6-01476d40e112]
2026-03-09 00:02:29.591418 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-09 00:02:29.598465 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-09 00:02:29.610806 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=51b9e2da-28ed-40a7-8c18-598646420d16]
2026-03-09 00:02:29.649502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=741bb6ef-88fa-4baa-bfac-ed82f0dadf29]
2026-03-09 00:02:30.553298 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=afdd9e4a-5d42-484b-9e33-03bc151f69d3]
2026-03-09 00:02:30.638789 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=9c04dc27-5761-4167-b6c1-ce71d70195a5]
2026-03-09 00:02:30.646740 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-09 00:02:33.007448 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=524a18ac-4c70-47e5-adf9-4e22d62cf9be]
2026-03-09 00:02:33.024338 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=9f2fd835-a7d9-47f6-b03f-7ff6492b6850]
2026-03-09 00:02:33.099982 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=b540138f-352a-495b-ba9e-a53eac3537c3]
2026-03-09 00:02:33.127243 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=3bd37f1a-45e9-4691-b1ea-c721d1b654c6]
2026-03-09 00:02:33.141304 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=b742876e-d11b-4355-b37d-f52f169b3127]
2026-03-09 00:02:33.154299 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=b3868cf7-4a53-4299-a9f2-4f48ea5905a3]
2026-03-09 00:02:34.223373 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=364718c9-badb-44b9-8b1d-8dd0e58e5ae5]
2026-03-09 00:02:34.230109 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-09 00:02:34.230317 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-09 00:02:34.232083 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-09 00:02:34.450792 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=420e150b-b542-452a-9473-df3ac721a462]
2026-03-09 00:02:34.457750 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-09 00:02:34.458917 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-09 00:02:34.459948 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-09 00:02:34.462639 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-09 00:02:34.466485 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-09 00:02:34.470508 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
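The `node_port_management[0..5]` resources being created here suggest one pre-created Neutron port per node on the management network. A minimal sketch of what such a counted port resource could look like, assuming the network, subnet, and security group references shown (none of this is confirmed by the log; the actual osism/testbed source may differ, and any fixed IP addressing is omitted because it is not visible here):

```hcl
# Sketch only -- one management port per node, secured with the node
# security group. count = 6 matches node_port_management[0..5] in the log.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,
  ]

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }
}
```

Creating the ports explicitly, rather than letting Nova allocate them, lets the security groups and subnet placement be fixed before the instances boot.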
2026-03-09 00:02:34.673939 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=77447b22-9221-4aff-9e50-41adda9775fb]
2026-03-09 00:02:34.855074 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=24b6cd15-fa47-4b6c-af2e-f03eec875a7a]
2026-03-09 00:02:35.041509 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=23462bc0-d4e0-45b3-b01b-df6b20f8956b]
2026-03-09 00:02:35.269841 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=f6443e8d-c946-4eed-b7a3-22ff2989e032]
2026-03-09 00:02:35.323317 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=4d7ba129-e427-4454-b7bf-7b35a2b4493a]
2026-03-09 00:02:35.338246 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-09 00:02:35.338650 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-09 00:02:35.340861 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-09 00:02:35.341233 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-09 00:02:35.342860 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-09 00:02:35.346918 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-09 00:02:35.347242 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-09 00:02:35.569984 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=746ac37d-c870-4345-ae19-4433e037f957]
2026-03-09 00:02:35.578135 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-09 00:02:35.792193 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=47d32d74-a88b-44d1-a0df-d024a822ef7c]
2026-03-09 00:02:35.803054 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-09 00:02:36.009385 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b1cd46cc-8764-41a5-bf95-753d1e4f73e5]
2026-03-09 00:02:36.016353 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-09 00:02:36.373616 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=1c16227d-dd67-427f-90ed-a214996d2ee7]
2026-03-09 00:02:36.417140 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=76e6eefb-12d3-4870-9bc8-6c43ca312338]
2026-03-09 00:02:36.448992 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=febaea8c-f4b0-4029-a3a6-ae99d37d8871]
2026-03-09 00:02:36.538748 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=a4db1b3e-8767-44ae-acdc-8df25b22f73b]
2026-03-09 00:02:36.548660 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=a75a7e1d-7f40-40ac-bb32-2f834e20a2f4]
2026-03-09 00:02:36.811668 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=f3070049-3da0-416e-82d5-ecb72b74309f]
2026-03-09 00:02:36.817748 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d7ed2248-cfc2-4eb8-a1e4-b48fd38423f5]
2026-03-09 00:02:37.103998 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=8124ab80-3a20-439a-a820-729b30e3914e]
2026-03-09 00:02:37.516528 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 4s [id=627bbf84-6393-40e2-80b4-1fde10278f4c]
2026-03-09 00:02:37.755127 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=09a99af7-47d6-4b4b-9443-1fa04f50ca8b]
2026-03-09 00:02:37.779372 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-09 00:02:37.785250 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-09 00:02:37.794187 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-09 00:02:37.794856 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-09 00:02:37.804271 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-09 00:02:37.811920 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-09 00:02:37.813227 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-09 00:02:40.141646 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=2a9a6a91-aae9-483a-be44-b7a4cff001ca]
2026-03-09 00:02:40.149001 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-09 00:02:40.155246 | orchestrator | local_file.inventory: Creating...
2026-03-09 00:02:40.157336 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
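The floating IP for the manager is allocated first and then bound to the manager's management port in a separate step, which is why `manager_floating_ip` and `manager_floating_ip_association` appear as two resources in the log. A minimal sketch of that pattern, assuming the port reference shown; the pool name `"public"` is a placeholder, since the external network is not named anywhere in this log:

```hcl
# Sketch only -- allocate a floating IP from an external pool.
# "public" is an assumed pool name, not taken from the log.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

# Bind the floating IP to the manager's management port. Doing this as a
# separate association resource lets the IP outlive port recreation.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```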
2026-03-09 00:02:40.161239 | orchestrator | local_file.inventory: Creation complete after 0s [id=4da52cdd0a199ed556c564bc59f034ac2af98fdf]
2026-03-09 00:02:40.163422 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1e7b00ebd19bd68204d6a1230a5890df64cb18d0]
2026-03-09 00:02:41.173190 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=2a9a6a91-aae9-483a-be44-b7a4cff001ca]
2026-03-09 00:02:47.788051 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-09 00:02:47.796374 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-09 00:02:47.799668 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-09 00:02:47.812128 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-09 00:02:47.816266 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-09 00:02:47.818728 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-09 00:02:57.796975 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-09 00:02:57.797098 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-09 00:02:57.800339 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-09 00:02:57.812656 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-09 00:02:57.816950 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-09 00:02:57.819157 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-09 00:03:07.806216 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-09 00:03:07.806363 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-09 00:03:07.806395 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-09 00:03:07.813635 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-09 00:03:07.817803 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-09 00:03:07.820033 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-09 00:03:08.732968 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=9d332275-6adc-4aae-b998-5a7b6a3d3129]
2026-03-09 00:03:17.814773 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-09 00:03:17.814918 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-09 00:03:17.814933 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-09 00:03:17.814943 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-09 00:03:17.818301 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-09 00:03:18.957796 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=04b908fb-ae28-4bf1-ad37-d027f79c4b14]
2026-03-09 00:03:27.823586 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-09 00:03:27.823712 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-09 00:03:27.823742 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-09 00:03:27.823754 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-09 00:03:28.810934 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=48228c9e-53d7-42b1-ab17-2130c92e5eb1]
2026-03-09 00:03:29.474298 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=664818bf-a1ea-4c4d-a217-21c742f9aa5f]
2026-03-09 00:03:29.507297 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 52s [id=0bd2c26d-1a46-4c56-9069-a26467393e45]
2026-03-09 00:03:29.806719 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 52s [id=45ee694d-09b6-4181-9408-056766ad3404]
2026-03-09 00:03:29.838953 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-09 00:03:29.841323 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-09 00:03:29.841366 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-09 00:03:29.843050 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-09 00:03:29.844471 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4984760042728825406]
2026-03-09 00:03:29.845047 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-09 00:03:29.847170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-09 00:03:29.853629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-09 00:03:29.853679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
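The nine `node_volume_attachment` resources that start here each pair one of the `node_volume` Cinder volumes with a node instance (the `id` values of the completed attachments below have the form `<server-id>/<volume-id>`). A minimal sketch of such a counted attachment resource; the `count` of 9 matches the log, but the instance/volume index mapping shown is a placeholder assumption, not the actual osism/testbed source:

```hcl
# Sketch only -- attach each data volume to a node. The "% 6" index
# mapping is an assumed placeholder; the real mapping of volumes to
# nodes is not visible in the plan output.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Because each attachment depends on both its instance and its volume, Terraform naturally defers all of them until the corresponding `node_server` instances finish booting, which matches the ordering seen in the log.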
2026-03-09 00:03:29.864689 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-09 00:03:29.868146 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-09 00:03:29.886240 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-09 00:03:33.376293 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=48228c9e-53d7-42b1-ab17-2130c92e5eb1/17d99fae-d184-430d-aac6-01476d40e112]
2026-03-09 00:03:33.379588 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=45ee694d-09b6-4181-9408-056766ad3404/7ad7d39e-c79f-49cf-9f83-32481f17a0bc]
2026-03-09 00:03:33.410676 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=664818bf-a1ea-4c4d-a217-21c742f9aa5f/771f98cb-74e3-479e-8ec9-00fdc11a8238]
2026-03-09 00:03:33.412281 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=664818bf-a1ea-4c4d-a217-21c742f9aa5f/51b9e2da-28ed-40a7-8c18-598646420d16]
2026-03-09 00:03:33.447652 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=48228c9e-53d7-42b1-ab17-2130c92e5eb1/320449d2-61ff-46fc-8f0d-ef8de6be542f]
2026-03-09 00:03:33.476656 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=45ee694d-09b6-4181-9408-056766ad3404/d616dde6-c913-49b8-b8ef-90f7cc767ff0]
2026-03-09 00:03:39.554577 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=664818bf-a1ea-4c4d-a217-21c742f9aa5f/fb37f328-fd68-494b-bcff-294494d86f6d]
2026-03-09 00:03:39.573646 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=48228c9e-53d7-42b1-ab17-2130c92e5eb1/741bb6ef-88fa-4baa-bfac-ed82f0dadf29]
2026-03-09 00:03:39.605667 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=45ee694d-09b6-4181-9408-056766ad3404/bf4da7fe-59ae-42e8-92ff-fb55dbc42396]
2026-03-09 00:03:39.886957 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-09 00:03:49.888313 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-09 00:03:50.256100 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=633263d1-94f4-4d02-9e01-1bc853a6bfbb]
2026-03-09 00:03:50.299786 | orchestrator |
2026-03-09 00:03:50.301767 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-09 00:03:50.301811 | orchestrator |
2026-03-09 00:03:50.301836 | orchestrator | Outputs:
2026-03-09 00:03:50.301857 | orchestrator |
2026-03-09 00:03:50.301901 | orchestrator | manager_address =
2026-03-09 00:03:50.301918 | orchestrator | private_key =
2026-03-09 00:03:50.479348 | orchestrator | ok: Runtime: 0:01:34.045581
2026-03-09 00:03:50.518592 |
2026-03-09 00:03:50.518724 | TASK [Fetch manager address]
2026-03-09 00:03:51.065430 | orchestrator | ok
2026-03-09 00:03:51.076381 |
2026-03-09 00:03:51.076747 | TASK [Set manager_host address]
2026-03-09 00:03:51.165078 | orchestrator | ok
2026-03-09 00:03:51.175355 |
2026-03-09 00:03:51.175475 | LOOP [Update ansible collections]
2026-03-09 00:03:56.466139 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-09 00:03:56.466546 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:03:56.466609 | orchestrator | Starting galaxy collection install process
2026-03-09 00:03:56.466650 | orchestrator | Process install dependency map
2026-03-09 00:03:56.466686 | orchestrator | Starting collection install process
2026-03-09 00:03:56.466719 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-03-09 00:03:56.466814 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-03-09 00:03:56.466972 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-09 00:03:56.467057 | orchestrator | ok: Item: commons Runtime: 0:00:04.939986
2026-03-09 00:03:58.150963 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:03:58.151158 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-09 00:03:58.151222 | orchestrator | Starting galaxy collection install process
2026-03-09 00:03:58.151365 | orchestrator | Process install dependency map
2026-03-09 00:03:58.151420 | orchestrator | Starting collection install process
2026-03-09 00:03:58.151464 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-03-09 00:03:58.151508 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-03-09 00:03:58.151549 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-09 00:03:58.151615 | orchestrator | ok: Item: services Runtime: 0:00:01.414007
2026-03-09 00:03:58.176071 |
2026-03-09 00:03:58.176249 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-09 00:04:08.754527 | orchestrator | ok
2026-03-09 00:04:08.764541 |
2026-03-09 00:04:08.764667 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-09 00:05:08.815522 | orchestrator | ok
2026-03-09 00:05:08.825415 |
2026-03-09 00:05:08.825538 | TASK [Fetch manager ssh hostkey]
2026-03-09 00:05:10.410032 | orchestrator | Output suppressed because no_log was given
2026-03-09 00:05:10.426933 |
2026-03-09 00:05:10.427175 | TASK [Get ssh keypair from terraform environment]
2026-03-09 00:05:10.966666 | orchestrator | ok: Runtime: 0:00:00.005956
2026-03-09 00:05:10.989582 |
2026-03-09 00:05:10.989774 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-09 00:05:11.027265 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-09 00:05:11.035941 |
2026-03-09 00:05:11.036062 | TASK [Run manager part 0]
2026-03-09 00:05:12.432433 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:05:12.512835 | orchestrator |
2026-03-09 00:05:12.512887 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-09 00:05:12.512894 | orchestrator |
2026-03-09 00:05:12.512908 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-09 00:05:14.549609 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:14.549698 | orchestrator |
2026-03-09 00:05:14.549750 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-09 00:05:14.549773 | orchestrator |
2026-03-09 00:05:14.549795 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:05:16.714955 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:16.715149 | orchestrator |
2026-03-09 00:05:16.715166 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-09 00:05:17.439252 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:17.439321 | orchestrator |
2026-03-09 00:05:17.439333 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-09 00:05:17.490788 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.490872 | orchestrator |
2026-03-09 00:05:17.490889 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-09 00:05:17.527718 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.527785 | orchestrator |
2026-03-09 00:05:17.527796 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-09 00:05:17.572244 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.572319 | orchestrator |
2026-03-09 00:05:17.572330 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-09 00:05:17.608666 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.608781 | orchestrator |
2026-03-09 00:05:17.608807 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-09 00:05:17.655363 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.655433 | orchestrator |
2026-03-09 00:05:17.655444 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-09 00:05:17.692737 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.692851 | orchestrator |
2026-03-09 00:05:17.692866 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-09 00:05:17.739682 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:17.739780 | orchestrator |
2026-03-09 00:05:17.739803 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-09 00:05:18.511185 | orchestrator | changed: [testbed-manager]
2026-03-09 00:05:18.511296 | orchestrator |
2026-03-09 00:05:18.511315 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-09 00:08:11.597011 | orchestrator | changed: [testbed-manager]
2026-03-09 00:08:11.597086 | orchestrator |
2026-03-09 00:08:11.597100 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-09 00:09:49.256450 | orchestrator | changed: [testbed-manager]
2026-03-09 00:09:49.256553 | orchestrator |
2026-03-09 00:09:49.256571 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-09 00:10:09.961027 | orchestrator | changed: [testbed-manager]
2026-03-09 00:10:09.961113 | orchestrator |
2026-03-09 00:10:09.961130 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-09 00:10:18.803809 | orchestrator | changed: [testbed-manager]
2026-03-09 00:10:18.803988 | orchestrator |
2026-03-09 00:10:18.804009 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-09 00:10:18.850585 | orchestrator | ok: [testbed-manager]
2026-03-09 00:10:18.850658 | orchestrator |
2026-03-09 00:10:18.850673 | orchestrator | TASK [Get current user] ********************************************************
2026-03-09 00:10:19.651807 | orchestrator | ok: [testbed-manager]
2026-03-09 00:10:19.651851 | orchestrator |
2026-03-09 00:10:19.651861 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-09 00:10:20.394358 | orchestrator | changed: [testbed-manager]
2026-03-09 00:10:20.394405 | orchestrator |
2026-03-09 00:10:20.394415 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-09 00:10:27.063731 | orchestrator | changed: [testbed-manager]
2026-03-09 00:10:27.063819 | orchestrator |
2026-03-09 00:10:27.063863 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-09 00:10:33.163494 | orchestrator | changed: [testbed-manager]
2026-03-09 00:10:33.163693 |
orchestrator | 2026-03-09 00:10:33.163719 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-09 00:10:36.000072 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:36.000137 | orchestrator | 2026-03-09 00:10:36.000147 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-09 00:10:37.781079 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:37.781173 | orchestrator | 2026-03-09 00:10:37.781196 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-09 00:10:38.925236 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:38.925365 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:38.925392 | orchestrator | 2026-03-09 00:10:38.925410 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-09 00:10:38.970711 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:10:38.970802 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:10:38.970819 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:10:38.970832 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:10:51.504274 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:51.504424 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:51.504439 | orchestrator | 2026-03-09 00:10:51.504449 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-09 00:10:52.093075 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:52.093140 | orchestrator | 2026-03-09 00:10:52.093151 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-09 00:14:12.833525 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-09 00:14:12.833639 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-09 00:14:12.833659 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-09 00:14:12.833672 | orchestrator | 2026-03-09 00:14:12.833685 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-09 00:14:15.300337 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-09 00:14:15.300431 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-09 00:14:15.300447 | orchestrator | 2026-03-09 00:14:15.300514 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-09 00:14:15.300534 | orchestrator | 2026-03-09 00:14:15.300551 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:14:16.817199 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:16.817273 | orchestrator | 2026-03-09 00:14:16.817291 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:14:16.860983 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:16.861037 | 
orchestrator | 2026-03-09 00:14:16.861045 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:14:16.923882 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:16.923976 | orchestrator | 2026-03-09 00:14:16.923993 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:14:17.755194 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:17.755287 | orchestrator | 2026-03-09 00:14:17.755305 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:14:18.521796 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:18.521909 | orchestrator | 2026-03-09 00:14:18.521934 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:14:19.910985 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-09 00:14:19.911037 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-09 00:14:19.911046 | orchestrator | 2026-03-09 00:14:19.911061 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-09 00:14:21.362096 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:21.362165 | orchestrator | 2026-03-09 00:14:21.362172 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-09 00:14:23.112103 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:14:23.112989 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-09 00:14:23.113017 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:14:23.113025 | orchestrator | 2026-03-09 00:14:23.113032 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-09 00:14:23.173655 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:14:23.173716 | orchestrator | 2026-03-09 00:14:23.173726 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-09 00:14:23.243238 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:23.243296 | orchestrator | 2026-03-09 00:14:23.243302 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-09 00:14:23.825245 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:23.825335 | orchestrator | 2026-03-09 00:14:23.825357 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-09 00:14:23.934521 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:23.934579 | orchestrator | 2026-03-09 00:14:23.934588 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-09 00:14:24.923232 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:14:24.923320 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:24.923336 | orchestrator | 2026-03-09 00:14:24.923349 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-09 00:14:24.954444 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:24.954526 | orchestrator | 2026-03-09 00:14:24.954535 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-09 00:14:24.979849 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:24.979913 | orchestrator | 2026-03-09 00:14:24.979923 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-09 00:14:25.008802 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:25.008865 | orchestrator | 2026-03-09 00:14:25.008875 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-09 00:14:25.072403 | 
orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:25.072539 | orchestrator | 2026-03-09 00:14:25.072558 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-09 00:14:25.785548 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:25.785638 | orchestrator | 2026-03-09 00:14:25.785654 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-09 00:14:25.785667 | orchestrator | 2026-03-09 00:14:25.785679 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:14:27.206733 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:27.206787 | orchestrator | 2026-03-09 00:14:27.206796 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-09 00:14:28.163792 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:28.164522 | orchestrator | 2026-03-09 00:14:28.164568 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:14:28.164590 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-09 00:14:28.164610 | orchestrator | 2026-03-09 00:14:28.428793 | orchestrator | ok: Runtime: 0:09:16.955295 2026-03-09 00:14:28.447745 | 2026-03-09 00:14:28.447884 | TASK [Point out that login on the manager is now possible] 2026-03-09 00:14:28.486446 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-09 00:14:28.496666 | 2026-03-09 00:14:28.496789 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-09 00:14:28.538425 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 
2026-03-09 00:14:28.548538 | 2026-03-09 00:14:28.548672 | TASK [Run manager part 1 + 2] 2026-03-09 00:14:29.368736 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:14:29.422748 | orchestrator | 2026-03-09 00:14:29.422849 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-09 00:14:29.422870 | orchestrator | 2026-03-09 00:14:29.422902 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:14:32.385760 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:32.385853 | orchestrator | 2026-03-09 00:14:32.385907 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-09 00:14:32.428229 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:32.428319 | orchestrator | 2026-03-09 00:14:32.428342 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-09 00:14:32.474009 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:32.474118 | orchestrator | 2026-03-09 00:14:32.474135 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:14:32.515684 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:32.515779 | orchestrator | 2026-03-09 00:14:32.515798 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:14:32.583778 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:32.583876 | orchestrator | 2026-03-09 00:14:32.583896 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:14:32.651375 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:32.651442 | orchestrator | 2026-03-09 00:14:32.651455 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:14:32.698329 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-09 00:14:32.698413 | orchestrator | 2026-03-09 00:14:32.698429 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-09 00:14:33.471714 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:33.472245 | orchestrator | 2026-03-09 00:14:33.472283 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:14:33.515637 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:33.515735 | orchestrator | 2026-03-09 00:14:33.515759 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:14:34.977157 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:34.977252 | orchestrator | 2026-03-09 00:14:34.977274 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:14:35.602548 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:35.602633 | orchestrator | 2026-03-09 00:14:35.602649 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 00:14:36.859902 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:36.859986 | orchestrator | 2026-03-09 00:14:36.860005 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:14:52.841242 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:52.841314 | orchestrator | 2026-03-09 00:14:52.841329 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-09 00:14:53.572318 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:53.572422 | orchestrator | 2026-03-09 00:14:53.572442 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-09 00:14:53.626990 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:14:53.627080 | orchestrator | 2026-03-09 00:14:53.627097 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-09 00:14:54.669047 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:54.669119 | orchestrator | 2026-03-09 00:14:54.669129 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-09 00:14:55.681328 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:55.681578 | orchestrator | 2026-03-09 00:14:55.681600 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-09 00:14:56.277917 | orchestrator | changed: [testbed-manager] 2026-03-09 00:14:56.278008 | orchestrator | 2026-03-09 00:14:56.278057 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-09 00:14:56.325234 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:14:56.325306 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:14:56.325313 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:14:56.325319 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:15:00.863761 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:00.863839 | orchestrator | 2026-03-09 00:15:00.863852 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-09 00:15:10.148705 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-09 00:15:10.148735 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-09 00:15:10.148741 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-09 00:15:10.148746 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-09 00:15:10.148753 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-09 00:15:10.148757 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-09 00:15:10.148761 | orchestrator | 2026-03-09 00:15:10.148765 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-09 00:15:11.277688 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:11.277838 | orchestrator | 2026-03-09 00:15:11.277850 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-09 00:15:11.322461 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:15:11.322507 | orchestrator | 2026-03-09 00:15:11.322515 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-09 00:15:14.434915 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:14.435037 | orchestrator | 2026-03-09 00:15:14.435053 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-09 00:15:14.472805 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:15:14.472881 | orchestrator | 2026-03-09 00:15:14.472895 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-09 00:17:03.699003 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:17:03.699042 | orchestrator | 2026-03-09 00:17:03.699050 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-09 00:17:05.204909 | orchestrator | ok: [testbed-manager] 2026-03-09 00:17:05.204952 | orchestrator | 2026-03-09 00:17:05.204959 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:17:05.204966 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-09 00:17:05.204971 | orchestrator | 2026-03-09 00:17:05.690400 | orchestrator | ok: Runtime: 0:02:36.462896 2026-03-09 00:17:05.708534 | 2026-03-09 00:17:05.708695 | TASK [Reboot manager] 2026-03-09 00:17:07.249172 | orchestrator | ok: Runtime: 0:00:01.000442 2026-03-09 00:17:07.269487 | 2026-03-09 00:17:07.269629 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-09 00:17:23.817930 | orchestrator | ok 2026-03-09 00:17:23.828494 | 2026-03-09 00:17:23.828636 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-09 00:18:23.876873 | orchestrator | ok 2026-03-09 00:18:23.887364 | 2026-03-09 00:18:23.887511 | TASK [Deploy manager + bootstrap nodes] 2026-03-09 00:18:26.620989 | orchestrator | 2026-03-09 00:18:26.621179 | orchestrator | # DEPLOY MANAGER 2026-03-09 00:18:26.621203 | orchestrator | 2026-03-09 00:18:26.621217 | orchestrator | + set -e 2026-03-09 00:18:26.621230 | orchestrator | + echo 2026-03-09 00:18:26.621244 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-09 00:18:26.621261 | orchestrator | + echo 2026-03-09 00:18:26.621312 | orchestrator | + cat /opt/manager-vars.sh 2026-03-09 00:18:26.623809 | orchestrator | export NUMBER_OF_NODES=6 2026-03-09 00:18:26.623854 | orchestrator | 2026-03-09 00:18:26.623868 | orchestrator | export CEPH_VERSION=reef 2026-03-09 00:18:26.623882 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-09 00:18:26.623895 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-09 00:18:26.623918 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-09 00:18:26.623929 | orchestrator | 2026-03-09 00:18:26.623947 | orchestrator | export ARA=false 2026-03-09 00:18:26.623958 | orchestrator | export DEPLOY_MODE=manager 2026-03-09 00:18:26.623976 | orchestrator | export TEMPEST=true 2026-03-09 00:18:26.623987 | orchestrator | export IS_ZUUL=true 2026-03-09 00:18:26.623998 | orchestrator | 2026-03-09 00:18:26.624016 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:18:26.624028 | orchestrator | export EXTERNAL_API=false 2026-03-09 00:18:26.624039 | orchestrator | 2026-03-09 00:18:26.624050 | orchestrator | export IMAGE_USER=ubuntu 2026-03-09 00:18:26.624065 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-09 00:18:26.624075 | orchestrator | 2026-03-09 00:18:26.624086 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-09 00:18:26.624104 | orchestrator | 2026-03-09 00:18:26.624116 | orchestrator | + echo 2026-03-09 00:18:26.624128 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:18:26.624693 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:18:26.624712 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:18:26.624732 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:18:26.624752 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-09 00:18:26.624771 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:18:26.624789 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:18:26.624806 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:18:26.624830 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:18:26.624850 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:18:26.624870 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 00:18:26.624890 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:18:26.624909 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-09 00:18:26.624927 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-09 00:18:26.624939 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:18:26.624960 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:18:26.624971 | orchestrator | ++ export ARA=false 2026-03-09 00:18:26.624982 | orchestrator | ++ ARA=false 2026-03-09 00:18:26.624993 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:18:26.625003 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:18:26.625014 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:18:26.625024 | orchestrator | ++ TEMPEST=true 2026-03-09 00:18:26.625035 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:18:26.625046 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:18:26.625056 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:18:26.625067 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:18:26.625083 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:18:26.625094 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:18:26.625105 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:18:26.625115 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:18:26.625126 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:18:26.625137 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:18:26.625148 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:18:26.625158 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:18:26.625169 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-09 00:18:26.680129 | orchestrator | + docker version 2026-03-09 00:18:26.777227 | orchestrator | Client: Docker Engine - Community 2026-03-09 00:18:26.777332 | orchestrator | Version: 27.5.1 2026-03-09 00:18:26.777350 | orchestrator | API version: 1.47 2026-03-09 00:18:26.777367 | orchestrator | Go version: go1.22.11 2026-03-09 00:18:26.777380 | orchestrator | Git commit: 9f9e405 2026-03-09 00:18:26.777392 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:18:26.777404 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:18:26.777414 | orchestrator | Context: default 2026-03-09 00:18:26.777425 | orchestrator | 2026-03-09 00:18:26.777437 | orchestrator | Server: Docker Engine - Community 2026-03-09 00:18:26.777448 | orchestrator | Engine: 2026-03-09 00:18:26.777459 | orchestrator | Version: 27.5.1 2026-03-09 00:18:26.777470 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-09 00:18:26.777511 | orchestrator | Go version: go1.22.11 2026-03-09 00:18:26.777523 | orchestrator | Git commit: 4c9b3b0 2026-03-09 00:18:26.777533 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:18:26.777544 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:18:26.777555 | orchestrator | Experimental: false 2026-03-09 00:18:26.777566 | orchestrator | containerd: 2026-03-09 00:18:26.777619 | orchestrator | Version: v2.2.1 2026-03-09 00:18:26.777632 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-09 00:18:26.777644 | orchestrator | runc: 2026-03-09 00:18:26.777654 | orchestrator | Version: 1.3.4 2026-03-09 00:18:26.777665 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-09 00:18:26.777676 | orchestrator | docker-init: 2026-03-09 00:18:26.777687 | orchestrator | Version: 0.19.0 2026-03-09 00:18:26.777698 | orchestrator | GitCommit: de40ad0 2026-03-09 00:18:26.779000 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-09 00:18:26.786110 | orchestrator | + set -e 2026-03-09 00:18:26.786192 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:18:26.786209 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:18:26.786224 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:18:26.786243 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:18:26.786260 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:18:26.786279 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 
00:18:26.786299 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:18:26.786317 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-09 00:18:26.786336 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-09 00:18:26.786356 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:18:26.786375 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:18:26.786392 | orchestrator | ++ export ARA=false 2026-03-09 00:18:26.786408 | orchestrator | ++ ARA=false 2026-03-09 00:18:26.786419 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:18:26.786430 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:18:26.786441 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:18:26.786451 | orchestrator | ++ TEMPEST=true 2026-03-09 00:18:26.786462 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:18:26.786472 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:18:26.786484 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:18:26.786495 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:18:26.786505 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:18:26.786516 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:18:26.786527 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:18:26.786537 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:18:26.786548 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:18:26.786558 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:18:26.786570 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:18:26.786617 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:18:26.786637 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:18:26.786648 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:18:26.786659 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:18:26.786670 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:18:26.786685 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-09 00:18:26.786696 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-09 00:18:26.786707 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-03-09 00:18:26.792387 | orchestrator | + set -e
2026-03-09 00:18:26.792424 | orchestrator | + VERSION=9.5.0
2026-03-09 00:18:26.792438 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:18:26.798079 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-09 00:18:26.798139 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:18:26.800067 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-09 00:18:26.804856 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-09 00:18:26.813423 | orchestrator | + set -e
2026-03-09 00:18:26.813510 | orchestrator | /opt/configuration ~
2026-03-09 00:18:26.813530 | orchestrator | + pushd /opt/configuration
2026-03-09 00:18:26.813543 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:18:26.815570 | orchestrator | + source /opt/venv/bin/activate
2026-03-09 00:18:26.817682 | orchestrator | ++ deactivate nondestructive
2026-03-09 00:18:26.817739 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:26.817755 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:26.817808 | orchestrator | ++ hash -r
2026-03-09 00:18:26.817833 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:26.817844 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-09 00:18:26.817855 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-09 00:18:26.817867 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-09 00:18:26.817879 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-09 00:18:26.817890 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-09 00:18:26.817901 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-09 00:18:26.817912 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-09 00:18:26.817924 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:18:26.817936 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:18:26.817947 | orchestrator | ++ export PATH
2026-03-09 00:18:26.817958 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:26.817969 | orchestrator | ++ '[' -z '' ']'
2026-03-09 00:18:26.817980 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-09 00:18:26.817990 | orchestrator | ++ PS1='(venv) '
2026-03-09 00:18:26.818001 | orchestrator | ++ export PS1
2026-03-09 00:18:26.818012 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-09 00:18:26.818075 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-09 00:18:26.818086 | orchestrator | ++ hash -r
2026-03-09 00:18:26.818097 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-09 00:18:28.127446 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-09 00:18:28.128269 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-09 00:18:28.129929 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-09 00:18:28.131150 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-09 00:18:28.132434 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-09 00:18:28.142544 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-09 00:18:28.143950 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-09 00:18:28.145189 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-09 00:18:28.146453 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-09 00:18:28.181463 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.5)
2026-03-09 00:18:28.182708 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-09 00:18:28.184777 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-09 00:18:28.186318 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-09 00:18:28.190173 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-09 00:18:28.412781 | orchestrator | ++ which gilt
2026-03-09 00:18:28.416863 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-09 00:18:28.416901 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-09 00:18:28.682996 | orchestrator | osism.cfg-generics:
2026-03-09 00:18:28.804317 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-09 00:18:28.804425 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-09 00:18:28.804440 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-09 00:18:28.804473 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-09 00:18:29.692193 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-09 00:18:29.706263 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-09 00:18:30.163832 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-09 00:18:30.220091 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:18:30.220202 | orchestrator | + deactivate
2026-03-09 00:18:30.220219 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-09 00:18:30.220233 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:18:30.220245 | orchestrator | + export PATH
2026-03-09 00:18:30.220256 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-09 00:18:30.220269 | orchestrator | + '[' -n '' ']'
2026-03-09 00:18:30.220282 | orchestrator | + hash -r
2026-03-09 00:18:30.220293 | orchestrator | + '[' -n '' ']'
2026-03-09 00:18:30.220304 | orchestrator | + unset VIRTUAL_ENV
2026-03-09 00:18:30.220315 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-09 00:18:30.220326 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-09 00:18:30.220350 | orchestrator | + unset -f deactivate
2026-03-09 00:18:30.220362 | orchestrator | + popd
2026-03-09 00:18:30.220535 | orchestrator | ~
2026-03-09 00:18:30.222298 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-09 00:18:30.222321 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-09 00:18:30.222767 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-09 00:18:30.275033 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-09 00:18:30.275136 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-09 00:18:30.275684 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-09 00:18:30.337218 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:18:30.337964 | orchestrator | ++ semver 2024.2 2025.1
2026-03-09 00:18:30.405175 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:18:30.405274 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-09 00:18:30.518074 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:18:30.518176 | orchestrator | + source /opt/venv/bin/activate
2026-03-09 00:18:30.518191 | orchestrator | ++ deactivate nondestructive
2026-03-09 00:18:30.518203 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:30.518213 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:30.518223 | orchestrator | ++ hash -r
2026-03-09 00:18:30.518234 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:30.518243 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-09 00:18:30.518253 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-09 00:18:30.518275 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-09 00:18:30.518287 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-09 00:18:30.518297 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-09 00:18:30.518308 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-09 00:18:30.518318 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-09 00:18:30.518328 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:18:30.518364 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:18:30.518375 | orchestrator | ++ export PATH
2026-03-09 00:18:30.518615 | orchestrator | ++ '[' -n '' ']'
2026-03-09 00:18:30.518633 | orchestrator | ++ '[' -z '' ']'
2026-03-09 00:18:30.518643 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-09 00:18:30.518653 | orchestrator | ++ PS1='(venv) '
2026-03-09 00:18:30.518662 | orchestrator | ++ export PS1
2026-03-09 00:18:30.518672 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-09 00:18:30.518715 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-09 00:18:30.518727 | orchestrator | ++ hash -r
2026-03-09 00:18:30.518921 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-09 00:18:31.777160 | orchestrator |
2026-03-09 00:18:31.777278 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-09 00:18:31.777305 | orchestrator |
2026-03-09 00:18:31.777326 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:18:32.366766 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:32.366883 | orchestrator |
2026-03-09 00:18:32.366915 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-09 00:18:33.383190 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:33.383306 | orchestrator |
2026-03-09 00:18:33.383317 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-09 00:18:33.383346 | orchestrator |
2026-03-09 00:18:33.383353 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:18:35.815271 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:35.815381 | orchestrator |
2026-03-09 00:18:35.815398 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-09 00:18:35.871811 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:35.871907 | orchestrator |
2026-03-09 00:18:35.871924 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-09 00:18:36.365014 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:36.365127 | orchestrator |
2026-03-09 00:18:36.365142 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-09 00:18:36.411373 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:36.411493 | orchestrator |
2026-03-09 00:18:36.411520 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-09 00:18:36.776179 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:36.776288 | orchestrator |
2026-03-09 00:18:36.776300 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-09 00:18:37.106154 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:37.106241 | orchestrator |
2026-03-09 00:18:37.106254 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-09 00:18:37.243089 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:37.243190 | orchestrator |
2026-03-09 00:18:37.243214 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-09 00:18:37.243227 | orchestrator |
2026-03-09 00:18:37.243239 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:18:39.024509 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:39.024581 | orchestrator |
2026-03-09 00:18:39.024636 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-09 00:18:39.135694 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-09 00:18:39.135762 | orchestrator |
2026-03-09 00:18:39.135769 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-09 00:18:39.193026 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-09 00:18:39.193103 | orchestrator |
2026-03-09 00:18:39.193113 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-09 00:18:40.322119 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-09 00:18:40.322224 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-09 00:18:40.322239 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-09 00:18:40.322251 | orchestrator |
2026-03-09 00:18:40.322265 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-09 00:18:42.241738 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-09 00:18:42.241862 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-09 00:18:42.241885 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-09 00:18:42.241902 | orchestrator |
2026-03-09 00:18:42.241921 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-09 00:18:42.897476 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:18:42.897565 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:42.897579 | orchestrator |
2026-03-09 00:18:42.897620 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-09 00:18:43.619834 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:18:43.619941 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:43.619967 | orchestrator |
2026-03-09 00:18:43.619990 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-09 00:18:43.680519 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:43.680641 | orchestrator |
2026-03-09 00:18:43.680658 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-09 00:18:44.064975 | orchestrator | ok: [testbed-manager]
2026-03-09 00:18:44.065060 | orchestrator |
2026-03-09 00:18:44.065072 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-09 00:18:44.145505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-09 00:18:44.145672 | orchestrator |
2026-03-09 00:18:44.145699 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-09 00:18:45.298529 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:45.298684 | orchestrator |
2026-03-09 00:18:45.298703 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-09 00:18:46.219572 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:46.219782 | orchestrator |
2026-03-09 00:18:46.219807 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-09 00:18:57.915566 | orchestrator | changed: [testbed-manager]
2026-03-09 00:18:57.915713 | orchestrator |
2026-03-09 00:18:57.915725 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-09 00:18:57.987691 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:18:57.987768 | orchestrator |
2026-03-09 00:18:57.987795 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-09 00:18:57.987804 | orchestrator |
2026-03-09 00:18:57.987810 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:19:00.073044 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:00.073152 | orchestrator |
2026-03-09 00:19:00.073173 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-09 00:19:00.199113 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-09 00:19:00.199226 | orchestrator |
2026-03-09 00:19:00.199244 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-09 00:19:00.261497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-09 00:19:00.261680 | orchestrator |
2026-03-09 00:19:00.261711 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-09 00:19:03.053488 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:03.053643 | orchestrator |
2026-03-09 00:19:03.054176 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-09 00:19:03.107369 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:03.107457 | orchestrator |
2026-03-09 00:19:03.107475 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-09 00:19:03.228972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-09 00:19:03.229032 | orchestrator |
2026-03-09 00:19:03.229043 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-09 00:19:05.810224 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-09 00:19:05.810294 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-09 00:19:05.810304 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-09 00:19:05.810313 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-09 00:19:05.810320 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-09 00:19:05.810328 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-09 00:19:05.810335 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-09 00:19:05.810342 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-09 00:19:05.810350 | orchestrator |
2026-03-09 00:19:05.810358 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-09 00:19:06.391853 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:06.391920 | orchestrator |
2026-03-09 00:19:06.391933 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-09 00:19:06.986921 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:06.987007 | orchestrator |
2026-03-09 00:19:06.987025 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-09 00:19:07.060923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-09 00:19:07.061020 | orchestrator |
2026-03-09 00:19:07.061036 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-09 00:19:08.338469 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-09 00:19:08.338574 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-09 00:19:08.338585 | orchestrator |
2026-03-09 00:19:08.338594 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-09 00:19:09.028644 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:09.028749 | orchestrator |
2026-03-09 00:19:09.028781 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-09 00:19:09.093989 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:19:09.094119 | orchestrator |
2026-03-09 00:19:09.094132 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-09 00:19:09.180982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-09 00:19:09.181070 | orchestrator |
2026-03-09 00:19:09.181083 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-09 00:19:09.843659 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:09.843758 | orchestrator |
2026-03-09 00:19:09.843773 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-09 00:19:09.914310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-09 00:19:09.914406 | orchestrator |
2026-03-09 00:19:09.914422 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-09 00:19:11.354922 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:19:11.355051 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:19:11.355068 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:11.355082 | orchestrator |
2026-03-09 00:19:11.355095 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-09 00:19:12.004731 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:12.004830 | orchestrator |
2026-03-09 00:19:12.004848 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-09 00:19:12.069702 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:19:12.069795 | orchestrator |
2026-03-09 00:19:12.069809 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-09 00:19:12.173197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-09 00:19:12.173300 | orchestrator |
2026-03-09 00:19:12.173324 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-09 00:19:12.684521 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:12.684632 | orchestrator |
2026-03-09 00:19:12.684651 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-09 00:19:13.085694 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:13.085781 | orchestrator |
2026-03-09 00:19:13.085797 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-09 00:19:14.218125 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-09 00:19:14.218199 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-09 00:19:14.218209 | orchestrator |
2026-03-09 00:19:14.218217 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-09 00:19:14.824236 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:14.824325 | orchestrator |
2026-03-09 00:19:14.824341 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-09 00:19:15.168675 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:15.168763 | orchestrator |
2026-03-09 00:19:15.168781 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-09 00:19:15.523588 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:15.523699 | orchestrator |
2026-03-09 00:19:15.523713 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-09 00:19:15.575842 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:19:15.575939 | orchestrator |
2026-03-09 00:19:15.575960 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-09 00:19:15.651711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-09 00:19:15.651817 | orchestrator |
2026-03-09 00:19:15.651832 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-09 00:19:15.691075 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:15.691155 | orchestrator |
2026-03-09 00:19:15.691169 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-09 00:19:17.590759 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-09 00:19:17.590847 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-09 00:19:17.590863 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-09 00:19:17.590875 | orchestrator |
2026-03-09 00:19:17.590888 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-09 00:19:18.252434 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:18.252521 | orchestrator |
2026-03-09 00:19:18.252536 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-09 00:19:18.873722 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:18.873772 | orchestrator |
2026-03-09 00:19:18.873778 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-09 00:19:19.543989 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:19.544089 | orchestrator |
2026-03-09 00:19:19.544107 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-09 00:19:19.604582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-09 00:19:19.604750 | orchestrator |
2026-03-09 00:19:19.604780 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-09 00:19:19.645960 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:19.646069 | orchestrator |
2026-03-09 00:19:19.646083 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-09 00:19:20.257152 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-09 00:19:20.257250 | orchestrator |
2026-03-09 00:19:20.257276 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-09 00:19:20.319755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-09 00:19:20.319846 | orchestrator |
2026-03-09 00:19:20.319871 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-09 00:19:20.977923 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:20.978086 | orchestrator |
2026-03-09 00:19:20.978105 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-09 00:19:21.583295 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:21.583404 | orchestrator |
2026-03-09 00:19:21.583421 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-09 00:19:21.643780 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:19:21.643902 | orchestrator |
2026-03-09 00:19:21.643918 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-09 00:19:21.709818 | orchestrator | ok: [testbed-manager]
2026-03-09 00:19:21.709930 | orchestrator |
2026-03-09 00:19:21.709954 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-09 00:19:22.540763 | orchestrator | changed: [testbed-manager]
2026-03-09 00:19:22.540867 | orchestrator |
2026-03-09 00:19:22.540885 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-09 00:20:36.303159 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:36.303290 | orchestrator |
2026-03-09 00:20:36.303317 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-09 00:20:37.289226 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:37.289333 | orchestrator |
2026-03-09 00:20:37.289350 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-09 00:20:37.349067 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:37.349168 | orchestrator |
2026-03-09 00:20:37.349183 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-09 00:20:39.774324 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:39.774419 | orchestrator |
2026-03-09 00:20:39.774430 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-09 00:20:39.832341 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:39.832438 | orchestrator |
2026-03-09 00:20:39.832454 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-09 00:20:39.832466 | orchestrator |
2026-03-09 00:20:39.832477 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-09 00:20:40.003602 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:40.003792 | orchestrator |
2026-03-09 00:20:40.003819 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-09 00:21:40.054000 | orchestrator | Pausing for 60 seconds
2026-03-09 00:21:40.054176 | orchestrator | changed: [testbed-manager]
2026-03-09 00:21:40.054196 | orchestrator |
2026-03-09 00:21:40.054211 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-09 00:21:43.225638 | orchestrator | changed: [testbed-manager]
2026-03-09 00:21:43.225812 | orchestrator |
2026-03-09 00:21:43.225830 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-09 00:22:45.247291 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-09 00:22:45.247396 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-09 00:22:45.247432 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-09 00:22:45.247445 | orchestrator | changed: [testbed-manager]
2026-03-09 00:22:45.247459 | orchestrator |
2026-03-09 00:22:45.247471 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-09 00:22:55.936545 | orchestrator | changed: [testbed-manager]
2026-03-09 00:22:55.936663 | orchestrator |
2026-03-09 00:22:55.936732 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-09 00:22:56.032913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-09 00:22:56.033032 | orchestrator |
2026-03-09 00:22:56.033051 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-09 00:22:56.033064 | orchestrator |
2026-03-09 00:22:56.033076 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-09 00:22:56.088134 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:22:56.088229 | orchestrator |
2026-03-09 00:22:56.088250 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-09 00:22:56.172640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-09 00:22:56.172877 | orchestrator |
2026-03-09 00:22:56.172904 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-09 00:22:56.928329 | orchestrator | changed: [testbed-manager]
2026-03-09 00:22:56.928428 | orchestrator |
2026-03-09 00:22:56.928445 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-09 00:23:00.184533 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:00.184641 | orchestrator |
2026-03-09 00:23:00.184662 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-09 00:23:00.271578 | orchestrator | ok: [testbed-manager] => {
2026-03-09 00:23:00.271749 | orchestrator | "version_check_result.stdout_lines": [
2026-03-09 00:23:00.271771 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-09 00:23:00.271783 | orchestrator | "Checking running containers against expected versions...",
2026-03-09 00:23:00.271796 | orchestrator | "",
2026-03-09 00:23:00.271807 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-09 00:23:00.271819 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-09 00:23:00.271831 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.271842 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-09 00:23:00.271853 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.271864 | orchestrator | "",
2026-03-09 00:23:00.271876 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-09 00:23:00.271887 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-09 00:23:00.271927 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.271939 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-09 00:23:00.271949 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.271960 | orchestrator | "",
2026-03-09 00:23:00.271971 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-09 00:23:00.271982 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-09 00:23:00.271993 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272004 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-09 00:23:00.272014 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272025 | orchestrator | "",
2026-03-09 00:23:00.272036 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-09 00:23:00.272047 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-09 00:23:00.272058 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272069 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-09 00:23:00.272080 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272093 | orchestrator | "",
2026-03-09 00:23:00.272109 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-09 00:23:00.272122 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-09 00:23:00.272134 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272146 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-09 00:23:00.272158 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272171 | orchestrator | "",
2026-03-09 00:23:00.272184 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-09 00:23:00.272197 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272210 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272222 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272235 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272248 | orchestrator | "",
2026-03-09 00:23:00.272260 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-09 00:23:00.272274 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-09 00:23:00.272286 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272299 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-09 00:23:00.272312 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272324 | orchestrator | "",
2026-03-09 00:23:00.272336 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-09 00:23:00.272348 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:23:00.272361 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272373 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:23:00.272385 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272398 | orchestrator | "",
2026-03-09 00:23:00.272410 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-09 00:23:00.272423 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-09 00:23:00.272435 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272448 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-09 00:23:00.272461 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272474 | orchestrator | "",
2026-03-09 00:23:00.272484 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-09 00:23:00.272495 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:23:00.272506 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272517 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:23:00.272527 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272538 | orchestrator | "",
2026-03-09 00:23:00.272548 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-09 00:23:00.272559 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272578 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272589 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272599 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272610 | orchestrator | "",
2026-03-09 00:23:00.272621 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-09 00:23:00.272632 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272642 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272653 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272664 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272675 | orchestrator | "",
2026-03-09 00:23:00.272709 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-09 00:23:00.272721 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272732 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272743 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272753 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272764 | orchestrator | "",
2026-03-09 00:23:00.272775 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-09 00:23:00.272786 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272797 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272808 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272854 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272866 | orchestrator | "",
2026-03-09 00:23:00.272877 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-09 00:23:00.272888 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272909 | orchestrator | " Enabled: true",
2026-03-09 00:23:00.272921 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-09 00:23:00.272932 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:23:00.272943 | orchestrator | "",
2026-03-09 00:23:00.272954 | orchestrator | "=== Summary ===",
2026-03-09 00:23:00.272965 | orchestrator | "Errors (version mismatches): 0",
2026-03-09 00:23:00.272976 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-09 00:23:00.272987 | orchestrator | "",
2026-03-09 00:23:00.272998 | orchestrator | "✅ All running containers match expected versions!"
2026-03-09 00:23:00.273009 | orchestrator | ]
2026-03-09 00:23:00.273020 | orchestrator | }
2026-03-09 00:23:00.273031 | orchestrator |
2026-03-09 00:23:00.273042 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-09 00:23:00.322709 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:23:00.322803 | orchestrator |
2026-03-09 00:23:00.322815 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:23:00.322825 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-09 00:23:00.322832 | orchestrator |
2026-03-09 00:23:00.434642 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:23:00.434803 | orchestrator | + deactivate
2026-03-09 00:23:00.434819 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-09 00:23:00.434834 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:23:00.434845 | orchestrator | + export PATH
2026-03-09 00:23:00.434857 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-09 00:23:00.434868 | orchestrator | + '[' -n '' ']'
2026-03-09 00:23:00.434880 | orchestrator | + hash -r
2026-03-09 00:23:00.434890 | orchestrator | + '[' -n '' ']'
2026-03-09 00:23:00.434901 | orchestrator | + unset VIRTUAL_ENV
2026-03-09 00:23:00.434912 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-09 00:23:00.434923 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-09 00:23:00.434934 | orchestrator | + unset -f deactivate
2026-03-09 00:23:00.434946 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-09 00:23:00.441200 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-09 00:23:00.441281 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-09 00:23:00.441339 | orchestrator | + local max_attempts=60
2026-03-09 00:23:00.441362 | orchestrator | + local name=ceph-ansible
2026-03-09 00:23:00.441381 | orchestrator | + local attempt_num=1
2026-03-09 00:23:00.441993 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-09 00:23:00.476050 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:23:00.476135 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-09 00:23:00.476149 | orchestrator | + local max_attempts=60
2026-03-09 00:23:00.476162 | orchestrator | + local name=kolla-ansible
2026-03-09 00:23:00.476173 | orchestrator | + local attempt_num=1
2026-03-09 00:23:00.476548 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-09 00:23:00.510352 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:23:00.510447 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-09 00:23:00.510462 | orchestrator | + local max_attempts=60
2026-03-09 00:23:00.510474 | orchestrator | + local name=osism-ansible
2026-03-09 00:23:00.510486 | orchestrator | + local attempt_num=1
2026-03-09 00:23:00.511244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-09 00:23:00.550860 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:23:00.550951 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-09 00:23:00.550967 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-09 00:23:01.160644 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-09 00:23:01.357896 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-09 00:23:01.358147 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:23:01.358181 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:23:01.358200 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-09 00:23:01.358220 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-03-09 00:23:01.358264 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-03-09 00:23:01.358285 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-03-09 00:23:01.358305 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-03-09 00:23:01.358324 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-03-09 00:23:01.358336 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-03-09 00:23:01.358347 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-03-09 00:23:01.358359 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-03-09 00:23:01.358369 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:23:01.358416 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-03-09 00:23:01.358436 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-03-09 00:23:01.358455 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-03-09 00:23:01.365149 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-09 00:23:01.413889 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-09 00:23:01.413984 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-09 00:23:01.416744 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-09 00:23:13.623235 | orchestrator | 2026-03-09 00:23:13 | INFO  | Task 881d7832-672e-409a-93b3-ffa5bece6aa8 (resolvconf) was prepared for execution.
2026-03-09 00:23:13.623354 | orchestrator | 2026-03-09 00:23:13 | INFO  | It takes a moment until task 881d7832-672e-409a-93b3-ffa5bece6aa8 (resolvconf) has been started and output is visible here.
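The `wait_for_container_healthy` calls traced above can be sketched as a small bash helper; this is a reconstruction from the visible `set -x` trace (`local max_attempts`, `local name`, `local attempt_num`, a `docker inspect` of `.State.Health.Status`), not the testbed's actual source. The polling interval between attempts is an assumption, since the log only shows the fast path where every container is already healthy on the first check.

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# DOCKER is parameterised here for testability; the job calls /usr/bin/docker.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # assumed polling interval; not visible in the trace
    done
    return 0
}
```

In the log, all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first `docker inspect`, so the retry loop never runs and the script proceeds straight to `disable-ara.sh`.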
2026-03-09 00:23:28.780796 | orchestrator |
2026-03-09 00:23:28.780950 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-09 00:23:28.780973 | orchestrator |
2026-03-09 00:23:28.780986 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:23:28.780998 | orchestrator | Monday 09 March 2026 00:23:17 +0000 (0:00:00.140) 0:00:00.140 **********
2026-03-09 00:23:28.781010 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:28.781021 | orchestrator |
2026-03-09 00:23:28.781033 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-09 00:23:28.781044 | orchestrator | Monday 09 March 2026 00:23:22 +0000 (0:00:04.833) 0:00:04.973 **********
2026-03-09 00:23:28.781055 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:23:28.781068 | orchestrator |
2026-03-09 00:23:28.781079 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-09 00:23:28.781090 | orchestrator | Monday 09 March 2026 00:23:22 +0000 (0:00:00.059) 0:00:05.033 **********
2026-03-09 00:23:28.781101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-09 00:23:28.781112 | orchestrator |
2026-03-09 00:23:28.781123 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-09 00:23:28.781134 | orchestrator | Monday 09 March 2026 00:23:22 +0000 (0:00:00.084) 0:00:05.108 **********
2026-03-09 00:23:28.781164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-09 00:23:28.781176 | orchestrator |
2026-03-09 00:23:28.781186 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-09 00:23:28.781197 | orchestrator | Monday 09 March 2026 00:23:22 +0000 (0:00:00.084) 0:00:05.192 **********
2026-03-09 00:23:28.781208 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:28.781219 | orchestrator |
2026-03-09 00:23:28.781230 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-09 00:23:28.781241 | orchestrator | Monday 09 March 2026 00:23:23 +0000 (0:00:01.170) 0:00:06.362 **********
2026-03-09 00:23:28.781251 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:23:28.781263 | orchestrator |
2026-03-09 00:23:28.781276 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-09 00:23:28.781289 | orchestrator | Monday 09 March 2026 00:23:24 +0000 (0:00:00.069) 0:00:06.432 **********
2026-03-09 00:23:28.781324 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:28.781337 | orchestrator |
2026-03-09 00:23:28.781350 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-09 00:23:28.781363 | orchestrator | Monday 09 March 2026 00:23:24 +0000 (0:00:00.531) 0:00:06.964 **********
2026-03-09 00:23:28.781375 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:23:28.781387 | orchestrator |
2026-03-09 00:23:28.781400 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-09 00:23:28.781414 | orchestrator | Monday 09 March 2026 00:23:24 +0000 (0:00:00.075) 0:00:07.040 **********
2026-03-09 00:23:28.781426 | orchestrator | changed: [testbed-manager]
2026-03-09 00:23:28.781439 | orchestrator |
2026-03-09 00:23:28.781451 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-09 00:23:28.781464 | orchestrator | Monday 09 March 2026 00:23:25 +0000 (0:00:00.568) 0:00:07.609 **********
2026-03-09 00:23:28.781476 | orchestrator | changed: [testbed-manager]
2026-03-09 00:23:28.781489 | orchestrator |
2026-03-09 00:23:28.781501 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-09 00:23:28.781513 | orchestrator | Monday 09 March 2026 00:23:26 +0000 (0:00:01.082) 0:00:08.691 **********
2026-03-09 00:23:28.781527 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:28.781539 | orchestrator |
2026-03-09 00:23:28.781553 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-09 00:23:28.781565 | orchestrator | Monday 09 March 2026 00:23:27 +0000 (0:00:01.002) 0:00:09.693 **********
2026-03-09 00:23:28.781578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-09 00:23:28.781591 | orchestrator |
2026-03-09 00:23:28.781604 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-09 00:23:28.781618 | orchestrator | Monday 09 March 2026 00:23:27 +0000 (0:00:00.090) 0:00:09.784 **********
2026-03-09 00:23:28.781630 | orchestrator | changed: [testbed-manager]
2026-03-09 00:23:28.781641 | orchestrator |
2026-03-09 00:23:28.781651 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:23:28.781663 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 00:23:28.781674 | orchestrator |
2026-03-09 00:23:28.781684 | orchestrator |
2026-03-09 00:23:28.781721 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:23:28.781735 | orchestrator | Monday 09 March 2026 00:23:28 +0000 (0:00:01.153) 0:00:10.937 **********
2026-03-09 00:23:28.781746 | orchestrator | ===============================================================================
2026-03-09 00:23:28.781757 | orchestrator | Gathering Facts --------------------------------------------------------- 4.83s
2026-03-09 00:23:28.781767 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s
2026-03-09 00:23:28.781778 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2026-03-09 00:23:28.781789 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2026-03-09 00:23:28.781799 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s
2026-03-09 00:23:28.781810 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s
2026-03-09 00:23:28.781841 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s
2026-03-09 00:23:28.781852 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-03-09 00:23:28.781863 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-03-09 00:23:28.781874 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-09 00:23:28.781884 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2026-03-09 00:23:28.781895 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-03-09 00:23:28.781969 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-03-09 00:23:29.075876 | orchestrator | + osism apply sshconfig
2026-03-09 00:23:41.239285 | orchestrator | 2026-03-09 00:23:41 | INFO  | Task 3e63eec5-092f-45e3-b9f1-9e8520b5e439 (sshconfig) was prepared for execution.
2026-03-09 00:23:41.239378 | orchestrator | 2026-03-09 00:23:41 | INFO  | It takes a moment until task 3e63eec5-092f-45e3-b9f1-9e8520b5e439 (sshconfig) has been started and output is visible here.
2026-03-09 00:23:53.197907 | orchestrator |
2026-03-09 00:23:53.198071 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-09 00:23:53.198092 | orchestrator |
2026-03-09 00:23:53.198124 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-09 00:23:53.198169 | orchestrator | Monday 09 March 2026 00:23:45 +0000 (0:00:00.161) 0:00:00.161 **********
2026-03-09 00:23:53.198182 | orchestrator | ok: [testbed-manager]
2026-03-09 00:23:53.198194 | orchestrator |
2026-03-09 00:23:53.198206 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-09 00:23:53.198217 | orchestrator | Monday 09 March 2026 00:23:46 +0000 (0:00:00.530) 0:00:00.692 **********
2026-03-09 00:23:53.198228 | orchestrator | changed: [testbed-manager]
2026-03-09 00:23:53.198241 | orchestrator |
2026-03-09 00:23:53.198252 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-09 00:23:53.198263 | orchestrator | Monday 09 March 2026 00:23:46 +0000 (0:00:00.518) 0:00:01.210 **********
2026-03-09 00:23:53.198274 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:23:53.198285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:23:53.198297 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:23:53.198308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:23:53.198318 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:23:53.198329 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:23:53.198340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:23:53.198351 | orchestrator |
2026-03-09 00:23:53.198362 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-09 00:23:53.198373 | orchestrator | Monday 09 March 2026 00:23:52 +0000 (0:00:05.715) 0:00:06.925 **********
2026-03-09 00:23:53.198384 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:23:53.198395 | orchestrator |
2026-03-09 00:23:53.198406 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-09 00:23:53.198416 | orchestrator | Monday 09 March 2026 00:23:52 +0000 (0:00:00.087) 0:00:07.013 **********
2026-03-09 00:23:53.198428 | orchestrator | changed: [testbed-manager]
2026-03-09 00:23:53.198440 | orchestrator |
2026-03-09 00:23:53.198453 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:23:53.198467 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:23:53.198481 | orchestrator |
2026-03-09 00:23:53.198493 | orchestrator |
2026-03-09 00:23:53.198505 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:23:53.198517 | orchestrator | Monday 09 March 2026 00:23:52 +0000 (0:00:00.570) 0:00:07.583 **********
2026-03-09 00:23:53.198530 | orchestrator | ===============================================================================
2026-03-09 00:23:53.198542 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s
2026-03-09 00:23:53.198556 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2026-03-09 00:23:53.198568 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s
2026-03-09 00:23:53.198581 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2026-03-09 00:23:53.198594 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s
2026-03-09 00:23:53.509350 | orchestrator | + osism apply known-hosts
2026-03-09 00:24:05.640385 | orchestrator | 2026-03-09 00:24:05 | INFO  | Task 88e0f759-95de-4ce1-91c7-1922a80fb48a (known-hosts) was prepared for execution.
2026-03-09 00:24:05.640492 | orchestrator | 2026-03-09 00:24:05 | INFO  | It takes a moment until task 88e0f759-95de-4ce1-91c7-1922a80fb48a (known-hosts) has been started and output is visible here.
2026-03-09 00:24:22.606519 | orchestrator |
2026-03-09 00:24:22.606644 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-09 00:24:22.606662 | orchestrator |
2026-03-09 00:24:22.606675 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-09 00:24:22.606687 | orchestrator | Monday 09 March 2026 00:24:09 +0000 (0:00:00.161) 0:00:00.161 **********
2026-03-09 00:24:22.606699 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:24:22.606710 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:24:22.606782 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:24:22.606802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:24:22.606819 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:24:22.606838 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:24:22.606856 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:24:22.606876 | orchestrator |
2026-03-09 00:24:22.606894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-09 00:24:22.606914 | orchestrator | Monday 09 March 2026 00:24:15 +0000 (0:00:06.084) 0:00:06.246 **********
2026-03-09 00:24:22.606927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-09 00:24:22.606940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-09 00:24:22.606951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-09 00:24:22.606962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-09 00:24:22.606973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-09 00:24:22.606995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-09 00:24:22.607007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-09 00:24:22.607020 | orchestrator |
2026-03-09 00:24:22.607033 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:24:22.607046 | orchestrator | Monday 09 March 2026 00:24:16 +0000 (0:00:00.170) 0:00:06.416 **********
2026-03-09 00:24:22.607060 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILEEvCqLTLkqenPFeFUf/SAnOdp4+Cy+ISv+H/xkGVdd)
2026-03-09 00:24:22.607083 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIsFUMNkq1A3Lvtz4QKZzpZiYjhmHcDOduP43ZG4OwU3lHYPqPHxa/uGS1QdinkxuRwTBsJ2jSkbhIPetJ3UCUTtj79rhMxg2GeRd+cHiLGBOBi2/S7akJraNs1YVl7ptirAJUu9eGqUu4briHh8FbufzjbhnS0cyiYOLZIPaYLswDvVGTMpyuxFVOAQmO2kEsEsXZEttJc8hUCC7fB3LhTM52kYJfDeeEjRlz6FhIAJqEeF122g435Iv+7VeuEyrbku2QuA0XaIE4RqG0Mes2w8p8ND1nHfYVVJTSc8CSMIhrnMl3GpoD88KEgkUYMTTJz0u9DyZsUZA33IpB/iTjD9V/jtZbwWHCVx2xXtFL1/yONujl/P2jPsnFlhvQx2Rkj5Q9VVTkohyhabvCMnKil9QDPNbiOPRa7hFzBsBm4Jonge03Ab+2iRWeKtmvj+c+3IdCuiLrR4hn9fMbKGnBWsT/xqmVzyg/J9Tivrl8T6xvvT7yuTNa9sUFW1BsLDU=)
2026-03-09 00:24:22.607120 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOCI2BEy1X3dRjRBxkm0BGwKxpgGNnA/3BWo3Vvr1sU6pvEgDdC8DwR9a6uPptU0Wo42Ub/chGdA81eGOty0PV4=)
2026-03-09 00:24:22.607135 | orchestrator |
2026-03-09 00:24:22.607149 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:24:22.607161 | orchestrator | Monday 09 March 2026 00:24:17 +0000 (0:00:01.195) 0:00:07.611 **********
2026-03-09 00:24:22.607193 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChLmhAwtCTH8bbRBazkyJJ+45jOX1luPM54yYGuEmXNrjC9pV0ZBVl0lJ+zP+TgLZPQntHDn6x4n78twHg4vRNMS/OzrfdNu4u7YKV8SK5pBvvyPsKLNuOzv+U8yXHUiWXW3hyhVIY0YOJGhvX7B5t45NROij1+zvZNyBxtoZtZkjiZYZClN79rqG2yFyk7EPxiXYUm2177NLfhDGR35ZjktQdyvwZmN6r5BaPV1ojAVxvuHvPiH27x4nuLsX0k/jRmNq4cGd2qnDgUdYyV5Iqp2CVo8jaeo2C1fs80SLcMSaXJvzP2NKlFhkzOdts/2pOkE4Yq0st6HL+h4TfGTchC/YE21Hj/UqSuXnmVq/sAzREQ9hhFH9unIcKYdmTHJOfzbFD7bxXrJuXq57/dwSFnO93FZskp0+FMdPpjJn+8bXQdm9pV9UeIdCJJZQ/0t1eLUV5OLp1AtAPgkRpI+K5zf+W9Alg1N1c3bOfjtbY4EVgZvSg+wiXcWvFj5OBTuE=)
2026-03-09 00:24:22.607207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICS+GcxCGeWHuAiMGt3bgGqcH/fFB9OEp9tDivxXx5tl)
2026-03-09 00:24:22.607221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGNcFyvrP9sjMEykeu/UGqtpxxFSMkejwMngX0WuawvPT0xZLNRRIxhGs2zhim7wloCBJrjryrSLloPHfSy980=)
2026-03-09 00:24:22.607233 | orchestrator |
2026-03-09 00:24:22.607246 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:24:22.607259 | orchestrator | Monday 09 March 2026 00:24:18 +0000 (0:00:01.079) 0:00:08.691 **********
2026-03-09 00:24:22.607271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQ6+0C3s7lBqeGh2saKpkBIAcSKBVCBWwfk9Gy0nx4VIVbrRAqGiXCK4jnjrJyUy0JY19gkN7beD9O0r2F9TnI=)
2026-03-09 00:24:22.607284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgS8CfOVpwSZuOYYMBFfIL0pXbGP54ecXAjW7r2bUnvpMOLNASr5XEnVJPKhyRIjeGPnLz+QljqX+S+BiizOlgCEot/qHIIu9to4Kpuw3csjIkUkC1t+XRETwazG9gkRbUuuK+SH53pCaH3Guljrc/vgzws9uqCRrzPjEMfhG3/r4f/SNgCTp0A1ZRL409b4ORZJCbD+yB2asmQhAGiF/dFrbyumkjwrVR93g5No3Xq0oai7opVqtuR1E5wPF+J99vg6vSmXREusQukLx/mBJKK2jiw6GcLYY4D2p33YjNzoRTdOfViJfyE7KDM8fiq8c8Ufva9jxRL/3k9686xh9or7TsQFVNMwo1PTN4voqmCyEn9QvpESXQp6OS3ydfZa+hcP1X47NN6xZZumlzf/noL64o98A6VYPC1Tvumg3w1ar6OnAh/tHTJUkvri4XWtwkasBfTfti+Wq6Id+N0Sqs3T0pE6GDAWyOjgM4pt/u4B+kmZ4P0pvOIjbe8U1ihQ0=)
2026-03-09 00:24:22.607298 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+ol7wgzUowLPKBcVq2GjY01ww7rFbb1fewK0Di5k7M)
2026-03-09 00:24:22.607310 | orchestrator |
2026-03-09 00:24:22.607322 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:24:22.607334 | orchestrator | Monday 09 March 2026 00:24:19 +0000 (0:00:01.108) 0:00:09.799 **********
2026-03-09 00:24:22.607348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZhreiRqKXGQF6+4U4evIvMF9ZtD6WZtr+saOFq9JN13naCsBfa4tCBBLS6ZHqTZvloGAEeQFWPblbE4/XxHSj0ApuH3jQwlVXiWYG5JHvGlBmHvEHPnorW4fQvXy6zKVKrgK/b6cz4aTQLtjjPQD78xibOa92q0H05cTsE3dmWv94W8rXJm834ZOo1VxqJWYTTZN5s6+nRnPyoOtIJ7QyTkR5P7QpoxxowC1IvglWwuKCfhHgfiZD5NLJfQ8J/NlMOiGY24JTS1/7+jkKo5gg/Nak7ylQ1lta8WGJrqOjxn+nzZmQVlBEbPmRQETlE4O0E+9zQ2vIJU9mNElzbLC1sWnR0UVFLMLNp4FNzgDW8eaBBx7eFJ04m5EUC2frMJRq3L4R6LLay0KqM/0j9gXXxq8Kc0cx6miQcRu5z63RIFrpMV+Hd1Ykvo7o+Uk3dXS5q0lDmqUGRykg8X/ui0nIRIGtKItEvDGlmBo4VB287YDJTQnKbkSA8qISBkI2nRU=)
2026-03-09 00:24:22.607368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH/mf0SL5z/kFUaYjExW2YzJKmZbZ7gdOMiHYx6n97rsmi3VMHZ1n1p+ltbFEQickQN/QJmuPz4saFVKtd5jPZo=)
2026-03-09 00:24:22.607379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHH481nMpGl5cJSF0TIApobVOApzoTylcsj+quJ+8j0O)
2026-03-09 00:24:22.607390 | orchestrator |
2026-03-09 00:24:22.607401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:24:22.607412 | orchestrator | Monday 09 March 2026 00:24:20 +0000 (0:00:01.050) 0:00:10.850 **********
2026-03-09 00:24:22.607493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCGu+pwD+keZVwbaMFhCrgQdPZ+hoghg9/yo1uBv07Uf2ewm+vP2oqqKfHX5ytQr50763+Umf/Uh9KdEs5qZp6UCtFla4SjHWIxMv6OTGz8kLvUBtCV1bafuPxk1tT6tzGFBtzVf8k825+lGOHaSDUQsKli0tstm0ci+/6WtwXSuOxxabj/c/8OTy1Xt5HuW67d7McV+eIQBgFy52r0oj/zho1fxltFEtoObOuYms48HiuMtK4x9q6nzG2Br1oLBhQYQIRmdoWB6I36J20Jx4PPZiuUAbTtxrpASP8jpIn+345PqR6jp0l36XE4oAsnNSjXgJUdWHaIqkSYypnTRMzgK9vxTvLBktp+Wgxw3H/YQ8FcantfyweUJx1xv55zG9ZQU1LfQznsdeTfYRt4SHM/pJYY+pKM/VeTWtJd9/jciguJRyXh9CTAqzCVttsS5whmMjM6jRnvdFcX6RDwRQRAYL6cCmjznJvSeZOiqFAS5dyNKWWaLkNVzfXRY+DSz7M=) 2026-03-09 00:24:22.607505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF6ZFYApnrsBQsLciDU1In5SI4X4lK+jvORtVz4HNV3Fc+iP7H0CvTZ2KHc+5TLdRocXY5Kukw/C7rkhhjLOc4g=) 2026-03-09 00:24:22.607517 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPMapeWiBXvFVz44apdDBVcB+8m7rNjYRwhdM2EF4S9N) 2026-03-09 00:24:22.607527 | orchestrator | 2026-03-09 00:24:22.607538 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:22.607549 | orchestrator | Monday 09 March 2026 00:24:21 +0000 (0:00:01.062) 0:00:11.912 ********** 2026-03-09 00:24:22.607567 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOkVGFDcM0FwyTIraUPz1n/xcwC+fNT/mPEiRN7jr0I/Z90EqyqLEJJ3gHkEHcr8xNglVaNkSkJqc3ShxWVy2VA=) 2026-03-09 00:24:33.700256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSXc9cy2wj455ZKN+zBXemngd3PMBEOU6Q4bAts9qPC37MgQyjv6/kEjRIHKEzi7bMC3eTXnQ8MGykw+n5rJ19EsTch7+toeziPhpiFVrc/2LlgOx4W0lP5qpdwCofRkHxyl+HCXbWGflBmRzKicAiPkl7pW2hH5UeWus6KUXJ8kASqL1Caba91Jo/zlfLxbfDHN95K1zeY/yAQC1PeV/fiRSWwzmdrrLlPp0VlXf1W9hIR6Ldp/1gNGwamAX8hZAGGR13QGfb/bHNv2hBLvJaM8zVI05v/M8sN2s6dvt2gQlSYKspvMYh23cm5ig4ZDaO5aEb2tc4LZLXW6H2QezUXVojlna5z56e6hDgN+KNQzfkBCOrPa6cWXNPary4r69zTHZvPwTB3xacYpaBC2EodKpLyVecofNRLWVmJJqaPw+dO+3ljxYuC+ZNmILSRMqWzpR3/7MGLNHFSwDJS5OQBLds1gSe6GmnRXFnH9qhTurm7squqTOF85A1SkNHqik=) 2026-03-09 00:24:33.700422 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3UOw/KDIhCgYElYJ94HBzoVeBKifTTifAxqQ7Y5ARC) 2026-03-09 00:24:33.700477 | orchestrator | 2026-03-09 00:24:33.700503 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:33.700518 | orchestrator | Monday 09 March 2026 00:24:22 +0000 (0:00:01.082) 0:00:12.995 ********** 2026-03-09 00:24:33.700531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC6IlQC/faSoJWfqZ6eyxBtfV2gcUeo4/hgy5+EHGWACU9KfUjVrTnkUFfjB4oRnj0En72vCn852gO/Z7yYUugU=) 2026-03-09 00:24:33.700547 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8qsMmPS7/fkAlCELQ25Q1EAZLnUDyZwPbukyu3uG5FMXw+mP8HVfyaFO16E24EE/16gY+1j1U4Ojpbq5W3/ahULMWrnEV+L7jPmcZs5PGvv9lMetLl9feJBwfM7GjHvcMud5+PTq6BZ8I/toQp9zMMajw5GBshiHCD/guFxwjyh2L7mRodJ6mcowiLCzovR/IfQAxviB19A6U/P5xb69cpRbkezdkdyw56vDNaIYtIPssRfQpA6zZZPvEqlVDqtfHc2bhBxAhHZmIebj/fRyPSJanvxIkcnGnwPbTgzgMDTMV1IdsWIZa3KW6X3JKSlhg8HML9q2cTwVQ0jPVlBqwEE2LTZp99QlTcwF8yZqWbvt+YoJBB91AbAKa98HMiTDER2n+eQNu/D1pBhpcrTFY6IEhOh8SwnAO0yr8Ip0Jqh3a/Zrh72ABeUBU6Z2RRtRUbugti9Gnkn65CgSdlun1ESla6IOf+2o4SUe2ymCa9eixPvj0db6idmpQG2mBTYc=) 2026-03-09 00:24:33.700587 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJ1TxenTA51w5pKsgjbMC114JCxsNTOP7/4qZGl67G9) 2026-03-09 00:24:33.700600 | orchestrator | 2026-03-09 00:24:33.700611 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-09 00:24:33.700624 | orchestrator | Monday 09 March 2026 00:24:23 +0000 (0:00:01.059) 0:00:14.055 ********** 2026-03-09 00:24:33.700635 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-09 00:24:33.700647 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-09 00:24:33.700658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-09 00:24:33.700668 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-09 00:24:33.700679 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-09 00:24:33.700690 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-09 00:24:33.700701 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-09 00:24:33.700711 | orchestrator | 2026-03-09 00:24:33.700751 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-09 00:24:33.700767 | orchestrator | Monday 09 March 2026 00:24:29 +0000 (0:00:05.418) 0:00:19.473 ********** 2026-03-09 00:24:33.700782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-09 00:24:33.700798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-09 00:24:33.700810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-03-09 00:24:33.700823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-09 00:24:33.700835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-09 00:24:33.700848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-09 00:24:33.700861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-09 00:24:33.700874 | orchestrator | 2026-03-09 00:24:33.700906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:33.700920 | orchestrator | Monday 09 March 2026 00:24:29 +0000 (0:00:00.168) 0:00:19.642 ********** 2026-03-09 00:24:33.700933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOCI2BEy1X3dRjRBxkm0BGwKxpgGNnA/3BWo3Vvr1sU6pvEgDdC8DwR9a6uPptU0Wo42Ub/chGdA81eGOty0PV4=) 2026-03-09 00:24:33.700955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIsFUMNkq1A3Lvtz4QKZzpZiYjhmHcDOduP43ZG4OwU3lHYPqPHxa/uGS1QdinkxuRwTBsJ2jSkbhIPetJ3UCUTtj79rhMxg2GeRd+cHiLGBOBi2/S7akJraNs1YVl7ptirAJUu9eGqUu4briHh8FbufzjbhnS0cyiYOLZIPaYLswDvVGTMpyuxFVOAQmO2kEsEsXZEttJc8hUCC7fB3LhTM52kYJfDeeEjRlz6FhIAJqEeF122g435Iv+7VeuEyrbku2QuA0XaIE4RqG0Mes2w8p8ND1nHfYVVJTSc8CSMIhrnMl3GpoD88KEgkUYMTTJz0u9DyZsUZA33IpB/iTjD9V/jtZbwWHCVx2xXtFL1/yONujl/P2jPsnFlhvQx2Rkj5Q9VVTkohyhabvCMnKil9QDPNbiOPRa7hFzBsBm4Jonge03Ab+2iRWeKtmvj+c+3IdCuiLrR4hn9fMbKGnBWsT/xqmVzyg/J9Tivrl8T6xvvT7yuTNa9sUFW1BsLDU=) 2026-03-09 00:24:33.700979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILEEvCqLTLkqenPFeFUf/SAnOdp4+Cy+ISv+H/xkGVdd) 2026-03-09 00:24:33.700992 | orchestrator | 2026-03-09 00:24:33.701006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:33.701019 | orchestrator | Monday 09 March 2026 00:24:30 +0000 (0:00:01.080) 0:00:20.722 ********** 2026-03-09 00:24:33.701033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGNcFyvrP9sjMEykeu/UGqtpxxFSMkejwMngX0WuawvPT0xZLNRRIxhGs2zhim7wloCBJrjryrSLloPHfSy980=) 2026-03-09 00:24:33.701046 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChLmhAwtCTH8bbRBazkyJJ+45jOX1luPM54yYGuEmXNrjC9pV0ZBVl0lJ+zP+TgLZPQntHDn6x4n78twHg4vRNMS/OzrfdNu4u7YKV8SK5pBvvyPsKLNuOzv+U8yXHUiWXW3hyhVIY0YOJGhvX7B5t45NROij1+zvZNyBxtoZtZkjiZYZClN79rqG2yFyk7EPxiXYUm2177NLfhDGR35ZjktQdyvwZmN6r5BaPV1ojAVxvuHvPiH27x4nuLsX0k/jRmNq4cGd2qnDgUdYyV5Iqp2CVo8jaeo2C1fs80SLcMSaXJvzP2NKlFhkzOdts/2pOkE4Yq0st6HL+h4TfGTchC/YE21Hj/UqSuXnmVq/sAzREQ9hhFH9unIcKYdmTHJOfzbFD7bxXrJuXq57/dwSFnO93FZskp0+FMdPpjJn+8bXQdm9pV9UeIdCJJZQ/0t1eLUV5OLp1AtAPgkRpI+K5zf+W9Alg1N1c3bOfjtbY4EVgZvSg+wiXcWvFj5OBTuE=) 2026-03-09 00:24:33.701059 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICS+GcxCGeWHuAiMGt3bgGqcH/fFB9OEp9tDivxXx5tl) 2026-03-09 00:24:33.701073 | orchestrator | 2026-03-09 00:24:33.701086 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:33.701098 | orchestrator | Monday 09 March 2026 00:24:31 +0000 (0:00:01.113) 0:00:21.835 ********** 2026-03-09 00:24:33.701109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+ol7wgzUowLPKBcVq2GjY01ww7rFbb1fewK0Di5k7M) 2026-03-09 00:24:33.701120 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgS8CfOVpwSZuOYYMBFfIL0pXbGP54ecXAjW7r2bUnvpMOLNASr5XEnVJPKhyRIjeGPnLz+QljqX+S+BiizOlgCEot/qHIIu9to4Kpuw3csjIkUkC1t+XRETwazG9gkRbUuuK+SH53pCaH3Guljrc/vgzws9uqCRrzPjEMfhG3/r4f/SNgCTp0A1ZRL409b4ORZJCbD+yB2asmQhAGiF/dFrbyumkjwrVR93g5No3Xq0oai7opVqtuR1E5wPF+J99vg6vSmXREusQukLx/mBJKK2jiw6GcLYY4D2p33YjNzoRTdOfViJfyE7KDM8fiq8c8Ufva9jxRL/3k9686xh9or7TsQFVNMwo1PTN4voqmCyEn9QvpESXQp6OS3ydfZa+hcP1X47NN6xZZumlzf/noL64o98A6VYPC1Tvumg3w1ar6OnAh/tHTJUkvri4XWtwkasBfTfti+Wq6Id+N0Sqs3T0pE6GDAWyOjgM4pt/u4B+kmZ4P0pvOIjbe8U1ihQ0=) 2026-03-09 00:24:33.701132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQ6+0C3s7lBqeGh2saKpkBIAcSKBVCBWwfk9Gy0nx4VIVbrRAqGiXCK4jnjrJyUy0JY19gkN7beD9O0r2F9TnI=) 2026-03-09 00:24:33.701143 | orchestrator | 2026-03-09 00:24:33.701154 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:33.701165 | orchestrator | Monday 09 March 2026 00:24:32 +0000 (0:00:01.150) 0:00:22.986 ********** 2026-03-09 00:24:33.701176 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHH481nMpGl5cJSF0TIApobVOApzoTylcsj+quJ+8j0O) 2026-03-09 00:24:33.701201 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZhreiRqKXGQF6+4U4evIvMF9ZtD6WZtr+saOFq9JN13naCsBfa4tCBBLS6ZHqTZvloGAEeQFWPblbE4/XxHSj0ApuH3jQwlVXiWYG5JHvGlBmHvEHPnorW4fQvXy6zKVKrgK/b6cz4aTQLtjjPQD78xibOa92q0H05cTsE3dmWv94W8rXJm834ZOo1VxqJWYTTZN5s6+nRnPyoOtIJ7QyTkR5P7QpoxxowC1IvglWwuKCfhHgfiZD5NLJfQ8J/NlMOiGY24JTS1/7+jkKo5gg/Nak7ylQ1lta8WGJrqOjxn+nzZmQVlBEbPmRQETlE4O0E+9zQ2vIJU9mNElzbLC1sWnR0UVFLMLNp4FNzgDW8eaBBx7eFJ04m5EUC2frMJRq3L4R6LLay0KqM/0j9gXXxq8Kc0cx6miQcRu5z63RIFrpMV+Hd1Ykvo7o+Uk3dXS5q0lDmqUGRykg8X/ui0nIRIGtKItEvDGlmBo4VB287YDJTQnKbkSA8qISBkI2nRU=) 2026-03-09 00:24:38.200650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH/mf0SL5z/kFUaYjExW2YzJKmZbZ7gdOMiHYx6n97rsmi3VMHZ1n1p+ltbFEQickQN/QJmuPz4saFVKtd5jPZo=) 2026-03-09 00:24:38.200846 | orchestrator | 2026-03-09 00:24:38.200865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:38.200876 | orchestrator | Monday 09 March 2026 00:24:33 +0000 (0:00:01.099) 0:00:24.085 ********** 2026-03-09 00:24:38.200886 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF6ZFYApnrsBQsLciDU1In5SI4X4lK+jvORtVz4HNV3Fc+iP7H0CvTZ2KHc+5TLdRocXY5Kukw/C7rkhhjLOc4g=) 2026-03-09 00:24:38.200897 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCGu+pwD+keZVwbaMFhCrgQdPZ+hoghg9/yo1uBv07Uf2ewm+vP2oqqKfHX5ytQr50763+Umf/Uh9KdEs5qZp6UCtFla4SjHWIxMv6OTGz8kLvUBtCV1bafuPxk1tT6tzGFBtzVf8k825+lGOHaSDUQsKli0tstm0ci+/6WtwXSuOxxabj/c/8OTy1Xt5HuW67d7McV+eIQBgFy52r0oj/zho1fxltFEtoObOuYms48HiuMtK4x9q6nzG2Br1oLBhQYQIRmdoWB6I36J20Jx4PPZiuUAbTtxrpASP8jpIn+345PqR6jp0l36XE4oAsnNSjXgJUdWHaIqkSYypnTRMzgK9vxTvLBktp+Wgxw3H/YQ8FcantfyweUJx1xv55zG9ZQU1LfQznsdeTfYRt4SHM/pJYY+pKM/VeTWtJd9/jciguJRyXh9CTAqzCVttsS5whmMjM6jRnvdFcX6RDwRQRAYL6cCmjznJvSeZOiqFAS5dyNKWWaLkNVzfXRY+DSz7M=) 2026-03-09 00:24:38.200909 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPMapeWiBXvFVz44apdDBVcB+8m7rNjYRwhdM2EF4S9N) 2026-03-09 00:24:38.200919 | orchestrator | 2026-03-09 00:24:38.200928 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:38.200937 | orchestrator | Monday 09 March 2026 00:24:34 +0000 (0:00:01.109) 0:00:25.195 ********** 2026-03-09 00:24:38.200945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOkVGFDcM0FwyTIraUPz1n/xcwC+fNT/mPEiRN7jr0I/Z90EqyqLEJJ3gHkEHcr8xNglVaNkSkJqc3ShxWVy2VA=) 2026-03-09 00:24:38.200955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSXc9cy2wj455ZKN+zBXemngd3PMBEOU6Q4bAts9qPC37MgQyjv6/kEjRIHKEzi7bMC3eTXnQ8MGykw+n5rJ19EsTch7+toeziPhpiFVrc/2LlgOx4W0lP5qpdwCofRkHxyl+HCXbWGflBmRzKicAiPkl7pW2hH5UeWus6KUXJ8kASqL1Caba91Jo/zlfLxbfDHN95K1zeY/yAQC1PeV/fiRSWwzmdrrLlPp0VlXf1W9hIR6Ldp/1gNGwamAX8hZAGGR13QGfb/bHNv2hBLvJaM8zVI05v/M8sN2s6dvt2gQlSYKspvMYh23cm5ig4ZDaO5aEb2tc4LZLXW6H2QezUXVojlna5z56e6hDgN+KNQzfkBCOrPa6cWXNPary4r69zTHZvPwTB3xacYpaBC2EodKpLyVecofNRLWVmJJqaPw+dO+3ljxYuC+ZNmILSRMqWzpR3/7MGLNHFSwDJS5OQBLds1gSe6GmnRXFnH9qhTurm7squqTOF85A1SkNHqik=) 2026-03-09 00:24:38.200964 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3UOw/KDIhCgYElYJ94HBzoVeBKifTTifAxqQ7Y5ARC) 2026-03-09 00:24:38.200972 | orchestrator | 2026-03-09 00:24:38.200981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:24:38.200990 | orchestrator | Monday 09 March 2026 00:24:35 +0000 (0:00:01.085) 0:00:26.280 ********** 2026-03-09 00:24:38.200999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC6IlQC/faSoJWfqZ6eyxBtfV2gcUeo4/hgy5+EHGWACU9KfUjVrTnkUFfjB4oRnj0En72vCn852gO/Z7yYUugU=) 2026-03-09 00:24:38.201022 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8qsMmPS7/fkAlCELQ25Q1EAZLnUDyZwPbukyu3uG5FMXw+mP8HVfyaFO16E24EE/16gY+1j1U4Ojpbq5W3/ahULMWrnEV+L7jPmcZs5PGvv9lMetLl9feJBwfM7GjHvcMud5+PTq6BZ8I/toQp9zMMajw5GBshiHCD/guFxwjyh2L7mRodJ6mcowiLCzovR/IfQAxviB19A6U/P5xb69cpRbkezdkdyw56vDNaIYtIPssRfQpA6zZZPvEqlVDqtfHc2bhBxAhHZmIebj/fRyPSJanvxIkcnGnwPbTgzgMDTMV1IdsWIZa3KW6X3JKSlhg8HML9q2cTwVQ0jPVlBqwEE2LTZp99QlTcwF8yZqWbvt+YoJBB91AbAKa98HMiTDER2n+eQNu/D1pBhpcrTFY6IEhOh8SwnAO0yr8Ip0Jqh3a/Zrh72ABeUBU6Z2RRtRUbugti9Gnkn65CgSdlun1ESla6IOf+2o4SUe2ymCa9eixPvj0db6idmpQG2mBTYc=) 2026-03-09 00:24:38.201032 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJ1TxenTA51w5pKsgjbMC114JCxsNTOP7/4qZGl67G9) 2026-03-09 00:24:38.201041 | orchestrator | 2026-03-09 00:24:38.201050 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-09 00:24:38.201065 | orchestrator | Monday 09 March 2026 00:24:36 +0000 (0:00:01.074) 0:00:27.355 ********** 2026-03-09 00:24:38.201075 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-09 00:24:38.201084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-09 00:24:38.201093 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-09 00:24:38.201102 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-09 00:24:38.201127 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 00:24:38.201136 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-09 00:24:38.201145 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-09 00:24:38.201154 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:24:38.201162 | orchestrator | 2026-03-09 00:24:38.201171 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-09 00:24:38.201180 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:00:00.155) 0:00:27.510 ********** 2026-03-09 00:24:38.201190 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:24:38.201200 | orchestrator | 2026-03-09 00:24:38.201210 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-09 00:24:38.201220 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:00:00.051) 0:00:27.561 ********** 2026-03-09 00:24:38.201234 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:24:38.201244 | orchestrator | 2026-03-09 00:24:38.201254 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-09 00:24:38.201264 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:00:00.048) 0:00:27.610 ********** 2026-03-09 00:24:38.201273 | orchestrator | changed: [testbed-manager] 2026-03-09 00:24:38.201283 | orchestrator | 2026-03-09 00:24:38.201293 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:24:38.201304 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:24:38.201316 | orchestrator | 2026-03-09 00:24:38.201325 | orchestrator | 2026-03-09 
00:24:38.201335 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:24:38.201345 | orchestrator | Monday 09 March 2026 00:24:37 +0000 (0:00:00.752) 0:00:28.362 ********** 2026-03-09 00:24:38.201355 | orchestrator | =============================================================================== 2026-03-09 00:24:38.201365 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.08s 2026-03-09 00:24:38.201375 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.42s 2026-03-09 00:24:38.201385 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-09 00:24:38.201395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-09 00:24:38.201406 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-09 00:24:38.201416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-09 00:24:38.201426 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-09 00:24:38.201435 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-09 00:24:38.201445 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-09 00:24:38.201456 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-09 00:24:38.201466 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-09 00:24:38.201475 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-09 00:24:38.201485 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-09 
00:24:38.201495 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-09 00:24:38.201510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-09 00:24:38.201520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-09 00:24:38.201531 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2026-03-09 00:24:38.201541 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-09 00:24:38.201551 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-09 00:24:38.201559 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-09 00:24:38.519216 | orchestrator | + osism apply squid 2026-03-09 00:24:50.577199 | orchestrator | 2026-03-09 00:24:50 | INFO  | Task 9ce95d64-8c55-4e36-b773-2f9ad4f9b9b8 (squid) was prepared for execution. 2026-03-09 00:24:50.577303 | orchestrator | 2026-03-09 00:24:50 | INFO  | It takes a moment until task 9ce95d64-8c55-4e36-b773-2f9ad4f9b9b8 (squid) has been started and output is visible here. 
2026-03-09 00:26:44.962404 | orchestrator | 2026-03-09 00:26:44.962519 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-09 00:26:44.962540 | orchestrator | 2026-03-09 00:26:44.962553 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-09 00:26:44.962563 | orchestrator | Monday 09 March 2026 00:24:54 +0000 (0:00:00.170) 0:00:00.170 ********** 2026-03-09 00:26:44.962570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:26:44.962578 | orchestrator | 2026-03-09 00:26:44.962585 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-09 00:26:44.962592 | orchestrator | Monday 09 March 2026 00:24:54 +0000 (0:00:00.087) 0:00:00.258 ********** 2026-03-09 00:26:44.962599 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:44.962606 | orchestrator | 2026-03-09 00:26:44.962613 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-09 00:26:44.962620 | orchestrator | Monday 09 March 2026 00:24:56 +0000 (0:00:01.532) 0:00:01.791 ********** 2026-03-09 00:26:44.962627 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-09 00:26:44.962634 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-09 00:26:44.962640 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-09 00:26:44.962647 | orchestrator | 2026-03-09 00:26:44.962654 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-09 00:26:44.962660 | orchestrator | Monday 09 March 2026 00:24:57 +0000 (0:00:01.203) 0:00:02.995 ********** 2026-03-09 00:26:44.962667 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-09 00:26:44.962674 | 
orchestrator | 2026-03-09 00:26:44.962680 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-09 00:26:44.962692 | orchestrator | Monday 09 March 2026 00:24:58 +0000 (0:00:01.103) 0:00:04.098 ********** 2026-03-09 00:26:44.962702 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:44.962714 | orchestrator | 2026-03-09 00:26:44.962725 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-09 00:26:44.962813 | orchestrator | Monday 09 March 2026 00:24:59 +0000 (0:00:00.348) 0:00:04.447 ********** 2026-03-09 00:26:44.962822 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:44.962829 | orchestrator | 2026-03-09 00:26:44.962835 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-09 00:26:44.962842 | orchestrator | Monday 09 March 2026 00:25:00 +0000 (0:00:00.916) 0:00:05.364 ********** 2026-03-09 00:26:44.962849 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-09 00:26:44.962860 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:44.962867 | orchestrator | 2026-03-09 00:26:44.962874 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-09 00:26:44.962904 | orchestrator | Monday 09 March 2026 00:25:31 +0000 (0:00:31.801) 0:00:37.165 ********** 2026-03-09 00:26:44.962911 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:44.962918 | orchestrator | 2026-03-09 00:26:44.962925 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-09 00:26:44.962931 | orchestrator | Monday 09 March 2026 00:25:43 +0000 (0:00:12.034) 0:00:49.199 ********** 2026-03-09 00:26:44.962938 | orchestrator | Pausing for 60 seconds 2026-03-09 00:26:44.962946 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:44.962955 | orchestrator | 2026-03-09 00:26:44.962963 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-09 00:26:44.962973 | orchestrator | Monday 09 March 2026 00:26:43 +0000 (0:01:00.118) 0:01:49.318 ********** 2026-03-09 00:26:44.962985 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:44.962995 | orchestrator | 2026-03-09 00:26:44.963007 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-09 00:26:44.963018 | orchestrator | Monday 09 March 2026 00:26:44 +0000 (0:00:00.072) 0:01:49.390 ********** 2026-03-09 00:26:44.963029 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:44.963040 | orchestrator | 2026-03-09 00:26:44.963050 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:26:44.963061 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:26:44.963072 | orchestrator | 2026-03-09 00:26:44.963082 | orchestrator | 2026-03-09 00:26:44.963093 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-09 00:26:44.963105 | orchestrator | Monday 09 March 2026 00:26:44 +0000 (0:00:00.637) 0:01:50.027 ********** 2026-03-09 00:26:44.963116 | orchestrator | =============================================================================== 2026-03-09 00:26:44.963127 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.12s 2026-03-09 00:26:44.963139 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.80s 2026-03-09 00:26:44.963146 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2026-03-09 00:26:44.963183 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.53s 2026-03-09 00:26:44.963200 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s 2026-03-09 00:26:44.963207 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2026-03-09 00:26:44.963214 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2026-03-09 00:26:44.963229 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-03-09 00:26:44.963236 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-03-09 00:26:44.963243 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-09 00:26:44.963249 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-09 00:26:45.297658 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-09 00:26:45.297823 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-09 00:26:45.344503 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:26:45.344601 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-09 00:26:45.349711 | orchestrator | + set -e 2026-03-09 00:26:45.349805 | orchestrator | + NAMESPACE=kolla/release 2026-03-09 00:26:45.349824 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-09 00:26:45.356154 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-09 00:26:45.417257 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-09 00:26:45.417527 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-09 00:26:57.586519 | orchestrator | 2026-03-09 00:26:57 | INFO  | Task 9afb3ac4-7d62-4487-bc2e-79aa55c872c5 (operator) was prepared for execution. 2026-03-09 00:26:57.586626 | orchestrator | 2026-03-09 00:26:57 | INFO  | It takes a moment until task 9afb3ac4-7d62-4487-bc2e-79aa55c872c5 (operator) has been started and output is visible here. 2026-03-09 00:27:14.643229 | orchestrator | 2026-03-09 00:27:14.643347 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-09 00:27:14.643372 | orchestrator | 2026-03-09 00:27:14.643389 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:27:14.643406 | orchestrator | Monday 09 March 2026 00:27:01 +0000 (0:00:00.155) 0:00:00.155 ********** 2026-03-09 00:27:14.643424 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:14.643442 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:14.643458 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:14.643477 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:14.643494 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:14.643511 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:14.643527 | orchestrator | 2026-03-09 00:27:14.643544 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-09 00:27:14.643561 | orchestrator | Monday 09 March 2026 00:27:06 +0000 (0:00:04.273) 0:00:04.428 
********** 2026-03-09 00:27:14.643577 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:14.643596 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:14.643613 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:14.643652 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:14.643670 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:14.643687 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:14.643704 | orchestrator | 2026-03-09 00:27:14.643720 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-09 00:27:14.643736 | orchestrator | 2026-03-09 00:27:14.643751 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:27:14.643768 | orchestrator | Monday 09 March 2026 00:27:06 +0000 (0:00:00.784) 0:00:05.213 ********** 2026-03-09 00:27:14.643841 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:14.643859 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:14.643875 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:14.643892 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:14.643907 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:14.643926 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:14.643942 | orchestrator | 2026-03-09 00:27:14.643958 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:27:14.643975 | orchestrator | Monday 09 March 2026 00:27:07 +0000 (0:00:00.173) 0:00:05.386 ********** 2026-03-09 00:27:14.643992 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:27:14.644009 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:27:14.644020 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:27:14.644029 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:27:14.644039 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:27:14.644048 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:27:14.644058 | orchestrator | 2026-03-09 00:27:14.644067 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:27:14.644077 | orchestrator | Monday 09 March 2026 00:27:07 +0000 (0:00:00.162) 0:00:05.548 ********** 2026-03-09 00:27:14.644087 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:14.644101 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:14.644118 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:14.644134 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:14.644150 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:14.644165 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:14.644180 | orchestrator | 2026-03-09 00:27:14.644196 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:27:14.644213 | orchestrator | Monday 09 March 2026 00:27:07 +0000 (0:00:00.631) 0:00:06.180 ********** 2026-03-09 00:27:14.644229 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:14.644246 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:14.644263 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:14.644279 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:14.644295 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:14.644311 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:14.644360 | orchestrator | 2026-03-09 00:27:14.644378 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:27:14.644394 | orchestrator | Monday 09 March 2026 00:27:08 +0000 (0:00:00.802) 0:00:06.982 ********** 2026-03-09 00:27:14.644410 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-09 00:27:14.644427 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-09 00:27:14.644444 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-09 00:27:14.644460 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-09 00:27:14.644476 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-09 00:27:14.644492 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-09 00:27:14.644508 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-09 00:27:14.644524 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-09 00:27:14.644540 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-09 00:27:14.644557 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-09 00:27:14.644573 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-09 00:27:14.644606 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-09 00:27:14.644623 | orchestrator | 2026-03-09 00:27:14.644654 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-09 00:27:14.644671 | orchestrator | Monday 09 March 2026 00:27:09 +0000 (0:00:01.178) 0:00:08.161 ********** 2026-03-09 00:27:14.644688 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:14.644704 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:14.644719 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:14.644736 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:14.644752 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:14.644792 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:14.644810 | orchestrator | 2026-03-09 00:27:14.644827 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-09 00:27:14.644844 | orchestrator | Monday 09 March 2026 00:27:11 +0000 (0:00:01.197) 0:00:09.358 ********** 2026-03-09 00:27:14.644860 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-09 00:27:14.644876 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-09 00:27:14.644892 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-09 00:27:14.644908 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.644951 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.644969 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.644985 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.645002 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.645018 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:27:14.645034 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645051 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645066 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645082 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645098 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645113 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-09 00:27:14.645130 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645143 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645153 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645163 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645172 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645193 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:27:14.645202 | 
orchestrator | 2026-03-09 00:27:14.645212 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-09 00:27:14.645222 | orchestrator | Monday 09 March 2026 00:27:12 +0000 (0:00:01.202) 0:00:10.560 ********** 2026-03-09 00:27:14.645231 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:14.645241 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:14.645250 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:14.645259 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:14.645269 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:14.645278 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:14.645292 | orchestrator | 2026-03-09 00:27:14.645309 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-09 00:27:14.645323 | orchestrator | Monday 09 March 2026 00:27:12 +0000 (0:00:00.161) 0:00:10.722 ********** 2026-03-09 00:27:14.645336 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:14.645350 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:14.645363 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:14.645377 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:14.645391 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:14.645405 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:14.645420 | orchestrator | 2026-03-09 00:27:14.645437 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-09 00:27:14.645454 | orchestrator | Monday 09 March 2026 00:27:12 +0000 (0:00:00.257) 0:00:10.979 ********** 2026-03-09 00:27:14.645471 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:14.645481 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:14.645491 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:14.645500 | orchestrator | changed: [testbed-node-1] 2026-03-09 
00:27:14.645509 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:14.645518 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:14.645528 | orchestrator | 2026-03-09 00:27:14.645537 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-09 00:27:14.645547 | orchestrator | Monday 09 March 2026 00:27:13 +0000 (0:00:00.669) 0:00:11.649 ********** 2026-03-09 00:27:14.645556 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:14.645565 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:14.645575 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:14.645584 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:14.645593 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:14.645602 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:14.645612 | orchestrator | 2026-03-09 00:27:14.645621 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-09 00:27:14.645631 | orchestrator | Monday 09 March 2026 00:27:13 +0000 (0:00:00.275) 0:00:11.925 ********** 2026-03-09 00:27:14.645640 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 00:27:14.645662 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:14.645672 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:27:14.645681 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:14.645691 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-09 00:27:14.645700 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:14.645709 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:27:14.645718 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:14.645728 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-09 00:27:14.645737 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:14.645746 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 
00:27:14.645755 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:14.645765 | orchestrator | 2026-03-09 00:27:14.645803 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-09 00:27:14.645813 | orchestrator | Monday 09 March 2026 00:27:14 +0000 (0:00:00.673) 0:00:12.598 ********** 2026-03-09 00:27:14.645831 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:14.645840 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:14.645850 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:14.645859 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:14.645869 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:14.645878 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:14.645887 | orchestrator | 2026-03-09 00:27:14.645897 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-09 00:27:14.645906 | orchestrator | Monday 09 March 2026 00:27:14 +0000 (0:00:00.209) 0:00:12.808 ********** 2026-03-09 00:27:14.645916 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:14.645925 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:14.645935 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:14.645944 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:14.645965 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:16.934544 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:16.934640 | orchestrator | 2026-03-09 00:27:16.934655 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-09 00:27:16.934667 | orchestrator | Monday 09 March 2026 00:27:14 +0000 (0:00:00.158) 0:00:12.966 ********** 2026-03-09 00:27:16.934677 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:16.934687 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:16.934697 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:27:16.934706 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:16.934716 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:16.934726 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:16.934735 | orchestrator | 2026-03-09 00:27:16.934745 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-09 00:27:16.934755 | orchestrator | Monday 09 March 2026 00:27:14 +0000 (0:00:00.175) 0:00:13.142 ********** 2026-03-09 00:27:16.934764 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:27:16.934837 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:27:16.934865 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:27:16.934875 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:27:16.934885 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:27:16.934894 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:27:16.934903 | orchestrator | 2026-03-09 00:27:16.934913 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-09 00:27:16.934923 | orchestrator | Monday 09 March 2026 00:27:16 +0000 (0:00:01.562) 0:00:14.705 ********** 2026-03-09 00:27:16.934940 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:27:16.934964 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:27:16.934983 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:27:16.934999 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:27:16.935014 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:27:16.935029 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:27:16.935046 | orchestrator | 2026-03-09 00:27:16.935062 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:27:16.935082 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935098 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935110 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935121 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935133 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935165 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 00:27:16.935175 | orchestrator | 2026-03-09 00:27:16.935197 | orchestrator | 2026-03-09 00:27:16.935207 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:27:16.935217 | orchestrator | Monday 09 March 2026 00:27:16 +0000 (0:00:00.292) 0:00:14.997 ********** 2026-03-09 00:27:16.935226 | orchestrator | =============================================================================== 2026-03-09 00:27:16.935236 | orchestrator | Gathering Facts --------------------------------------------------------- 4.27s 2026-03-09 00:27:16.935246 | orchestrator | osism.commons.operator : Set password ----------------------------------- 1.56s 2026-03-09 00:27:16.935255 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2026-03-09 00:27:16.935266 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-03-09 00:27:16.935276 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-03-09 00:27:16.935285 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-03-09 00:27:16.935295 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2026-03-09 00:27:16.935305 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.67s 2026-03-09 00:27:16.935321 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.67s 2026-03-09 00:27:16.935345 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-03-09 00:27:16.935365 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2026-03-09 00:27:16.935381 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.28s 2026-03-09 00:27:16.935397 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.26s 2026-03-09 00:27:16.935414 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.21s 2026-03-09 00:27:16.935429 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2026-03-09 00:27:16.935446 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-03-09 00:27:16.935462 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-03-09 00:27:16.935479 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-03-09 00:27:16.935497 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-03-09 00:27:17.276427 | orchestrator | + osism apply --environment custom facts 2026-03-09 00:27:19.250397 | orchestrator | 2026-03-09 00:27:19 | INFO  | Trying to run play facts in environment custom 2026-03-09 00:27:29.375650 | orchestrator | 2026-03-09 00:27:29 | INFO  | Task fbe67123-f964-4a97-bd16-f1b6f0fbca62 (facts) was prepared for execution. 2026-03-09 00:27:29.375764 | orchestrator | 2026-03-09 00:27:29 | INFO  | It takes a moment until task fbe67123-f964-4a97-bd16-f1b6f0fbca62 (facts) has been started and output is visible here. 
2026-03-09 00:28:13.790181 | orchestrator | 2026-03-09 00:28:13.790285 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-09 00:28:13.790297 | orchestrator | 2026-03-09 00:28:13.790304 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-09 00:28:13.790311 | orchestrator | Monday 09 March 2026 00:27:34 +0000 (0:00:00.088) 0:00:00.088 ********** 2026-03-09 00:28:13.790318 | orchestrator | ok: [testbed-manager] 2026-03-09 00:28:13.790326 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:28:13.790333 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:28:13.790340 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:28:13.790346 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.790352 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.790383 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.790390 | orchestrator | 2026-03-09 00:28:13.790397 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-09 00:28:13.790403 | orchestrator | Monday 09 March 2026 00:27:35 +0000 (0:00:01.398) 0:00:01.486 ********** 2026-03-09 00:28:13.790409 | orchestrator | ok: [testbed-manager] 2026-03-09 00:28:13.790416 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.790422 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.790428 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:28:13.790434 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:28:13.790440 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:28:13.790446 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.790452 | orchestrator | 2026-03-09 00:28:13.790458 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-09 00:28:13.790464 | orchestrator | 2026-03-09 00:28:13.790470 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:28:13.790476 | orchestrator | Monday 09 March 2026 00:27:36 +0000 (0:00:01.214) 0:00:02.701 ********** 2026-03-09 00:28:13.790482 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:28:13.790488 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:28:13.790494 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:28:13.790500 | orchestrator | 2026-03-09 00:28:13.790507 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:28:13.790514 | orchestrator | Monday 09 March 2026 00:27:36 +0000 (0:00:00.116) 0:00:02.818 ********** 2026-03-09 00:28:13.790520 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:28:13.790526 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:28:13.790532 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:28:13.790538 | orchestrator | 2026-03-09 00:28:13.790544 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:28:13.790550 | orchestrator | Monday 09 March 2026 00:27:37 +0000 (0:00:00.226) 0:00:03.045 ********** 2026-03-09 00:28:13.790556 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:28:13.790562 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:28:13.790568 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:28:13.790574 | orchestrator | 2026-03-09 00:28:13.790580 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:28:13.790586 | orchestrator | Monday 09 March 2026 00:27:37 +0000 (0:00:00.242) 0:00:03.287 ********** 2026-03-09 00:28:13.790594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:28:13.790601 | orchestrator | 2026-03-09 00:28:13.790618 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-09 00:28:13.790624 | orchestrator | Monday 09 March 2026 00:27:37 +0000 (0:00:00.140) 0:00:03.428 ********** 2026-03-09 00:28:13.790630 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:28:13.790636 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:28:13.790642 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:28:13.790648 | orchestrator | 2026-03-09 00:28:13.790654 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:28:13.790660 | orchestrator | Monday 09 March 2026 00:27:37 +0000 (0:00:00.457) 0:00:03.886 ********** 2026-03-09 00:28:13.790666 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:28:13.790672 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:28:13.790679 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:28:13.790685 | orchestrator | 2026-03-09 00:28:13.790691 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:28:13.790697 | orchestrator | Monday 09 March 2026 00:27:38 +0000 (0:00:00.167) 0:00:04.053 ********** 2026-03-09 00:28:13.790703 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.790709 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.790715 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.790721 | orchestrator | 2026-03-09 00:28:13.790727 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:28:13.790738 | orchestrator | Monday 09 March 2026 00:27:39 +0000 (0:00:01.056) 0:00:05.109 ********** 2026-03-09 00:28:13.790744 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:28:13.790751 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:28:13.790757 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:28:13.790763 | orchestrator | 2026-03-09 00:28:13.790769 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 
00:28:13.790775 | orchestrator | Monday 09 March 2026 00:27:39 +0000 (0:00:00.460) 0:00:05.570 ********** 2026-03-09 00:28:13.790781 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.790787 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.790793 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.790799 | orchestrator | 2026-03-09 00:28:13.790805 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:28:13.790875 | orchestrator | Monday 09 March 2026 00:27:40 +0000 (0:00:01.011) 0:00:06.582 ********** 2026-03-09 00:28:13.790883 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.790889 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.790895 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.790902 | orchestrator | 2026-03-09 00:28:13.790908 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-09 00:28:13.790914 | orchestrator | Monday 09 March 2026 00:27:56 +0000 (0:00:15.458) 0:00:22.040 ********** 2026-03-09 00:28:13.790920 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:28:13.790926 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:28:13.790932 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:28:13.790938 | orchestrator | 2026-03-09 00:28:13.790944 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-09 00:28:13.790965 | orchestrator | Monday 09 March 2026 00:27:56 +0000 (0:00:00.097) 0:00:22.137 ********** 2026-03-09 00:28:13.790971 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:28:13.791016 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:28:13.791023 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:28:13.791029 | orchestrator | 2026-03-09 00:28:13.791035 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-09 
00:28:13.791045 | orchestrator | Monday 09 March 2026 00:28:04 +0000 (0:00:08.426) 0:00:30.563 **********
2026-03-09 00:28:13.791051 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:13.791058 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:13.791064 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:13.791070 | orchestrator |
2026-03-09 00:28:13.791076 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-09 00:28:13.791082 | orchestrator | Monday 09 March 2026 00:28:05 +0000 (0:00:00.464) 0:00:31.028 **********
2026-03-09 00:28:13.791088 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-09 00:28:13.791095 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-09 00:28:13.791101 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-09 00:28:13.791108 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-09 00:28:13.791114 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-09 00:28:13.791120 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-09 00:28:13.791126 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-09 00:28:13.791132 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-09 00:28:13.791138 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-09 00:28:13.791144 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:28:13.791150 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:28:13.791156 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:28:13.791162 | orchestrator |
2026-03-09 00:28:13.791168 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-09 00:28:13.791180 | orchestrator | Monday 09 March 2026 00:28:08 +0000 (0:00:03.688) 0:00:34.717 **********
2026-03-09 00:28:13.791186 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:13.791192 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:13.791198 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:13.791204 | orchestrator |
2026-03-09 00:28:13.791211 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:28:13.791217 | orchestrator |
2026-03-09 00:28:13.791223 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:28:13.791229 | orchestrator | Monday 09 March 2026 00:28:10 +0000 (0:00:01.320) 0:00:36.038 **********
2026-03-09 00:28:13.791237 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:13.791247 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:13.791258 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:13.791267 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:13.791276 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:13.791286 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:13.791295 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:13.791305 | orchestrator |
2026-03-09 00:28:13.791314 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:28:13.791324 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:28:13.791335 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:28:13.791347 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:28:13.791358 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:28:13.791369 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:28:13.791376 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:28:13.791383 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:28:13.791389 | orchestrator |
2026-03-09 00:28:13.791395 | orchestrator |
2026-03-09 00:28:13.791401 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:28:13.791408 | orchestrator | Monday 09 March 2026 00:28:13 +0000 (0:00:03.676) 0:00:39.714 **********
2026-03-09 00:28:13.791414 | orchestrator | ===============================================================================
2026-03-09 00:28:13.791420 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.46s
2026-03-09 00:28:13.791426 | orchestrator | Install required packages (Debian) -------------------------------------- 8.43s
2026-03-09 00:28:13.791432 | orchestrator | Copy fact files --------------------------------------------------------- 3.69s
2026-03-09 00:28:13.791438 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s
2026-03-09 00:28:13.791444 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-03-09 00:28:13.791450 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-03-09 00:28:13.791463 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-03-09 00:28:14.066953 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-03-09 00:28:14.067084 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s
2026-03-09 00:28:14.067124 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-03-09 00:28:14.067139 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-09 00:28:14.067178 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-03-09 00:28:14.067189 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-03-09 00:28:14.067200 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-03-09 00:28:14.067211 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.17s
2026-03-09 00:28:14.067222 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-09 00:28:14.067234 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-03-09 00:28:14.067245 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-09 00:28:14.443812 | orchestrator | + osism apply bootstrap
2026-03-09 00:28:26.585951 | orchestrator | 2026-03-09 00:28:26 | INFO  | Task ff25f830-9dbb-40c5-9fec-d451df04effd (bootstrap) was prepared for execution.
2026-03-09 00:28:26.586153 | orchestrator | 2026-03-09 00:28:26 | INFO  | It takes a moment until task ff25f830-9dbb-40c5-9fec-d451df04effd (bootstrap) has been started and output is visible here.
2026-03-09 00:28:42.952490 | orchestrator |
2026-03-09 00:28:42.952608 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-09 00:28:42.952625 | orchestrator |
2026-03-09 00:28:42.952638 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-09 00:28:42.952650 | orchestrator | Monday 09 March 2026 00:28:31 +0000 (0:00:00.169) 0:00:00.169 **********
2026-03-09 00:28:42.952662 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:42.952673 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:42.952684 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:42.952695 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:42.952706 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:42.952717 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:42.952727 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:42.952738 | orchestrator |
2026-03-09 00:28:42.952750 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:28:42.952761 | orchestrator |
2026-03-09 00:28:42.952772 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:28:42.952783 | orchestrator | Monday 09 March 2026 00:28:31 +0000 (0:00:00.311) 0:00:00.480 **********
2026-03-09 00:28:42.952794 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:42.952805 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:42.952815 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:42.952826 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:42.952837 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:42.952848 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:42.952884 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:42.952895 | orchestrator |
2026-03-09 00:28:42.952906 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-09 00:28:42.952917 | orchestrator |
2026-03-09 00:28:42.952928 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:28:42.952940 | orchestrator | Monday 09 March 2026 00:28:34 +0000 (0:00:03.532) 0:00:04.013 **********
2026-03-09 00:28:42.952952 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:28:42.952963 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:28:42.952974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-09 00:28:42.952985 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:28:42.952996 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-09 00:28:42.953007 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:28:42.953018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:28:42.953029 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-09 00:28:42.953040 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:28:42.953076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:28:42.953087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-09 00:28:42.953099 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:28:42.953109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:28:42.953120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-09 00:28:42.953131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:28:42.953142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-09 00:28:42.953153 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:28:42.953164 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:42.953175 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-09 00:28:42.953186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:28:42.953196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-09 00:28:42.953207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:28:42.953217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-09 00:28:42.953228 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-09 00:28:42.953239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-09 00:28:42.953250 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:42.953260 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:42.953271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-09 00:28:42.953282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-09 00:28:42.953292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-09 00:28:42.953303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-09 00:28:42.953314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-09 00:28:42.953325 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-09 00:28:42.953336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-09 00:28:42.953346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-09 00:28:42.953357 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-09 00:28:42.953368 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-09 00:28:42.953378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-09 00:28:42.953389 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-09 00:28:42.953400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-09 00:28:42.953410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-09 00:28:42.953421 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:42.953432 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-09 00:28:42.953442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-09 00:28:42.953453 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-09 00:28:42.953464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-09 00:28:42.953491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-09 00:28:42.953503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-09 00:28:42.953513 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:42.953524 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-09 00:28:42.953535 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-09 00:28:42.953546 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-09 00:28:42.953556 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-09 00:28:42.953567 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:42.953585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-09 00:28:42.953612 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:42.953624 | orchestrator |
2026-03-09 00:28:42.953635 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-09 00:28:42.953646 | orchestrator |
2026-03-09 00:28:42.953657 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-09 00:28:42.953668 | orchestrator | Monday 09 March 2026 00:28:35 +0000 (0:00:00.547) 0:00:04.560 **********
2026-03-09 00:28:42.953679 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:42.953690 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:42.953701 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:42.953712 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:42.953723 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:42.953733 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:42.953744 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:42.953755 | orchestrator |
2026-03-09 00:28:42.953766 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-09 00:28:42.953777 | orchestrator | Monday 09 March 2026 00:28:36 +0000 (0:00:01.200) 0:00:05.761 **********
2026-03-09 00:28:42.953788 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:42.953799 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:42.953810 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:42.953821 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:42.953831 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:42.953842 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:42.953882 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:42.953894 | orchestrator |
2026-03-09 00:28:42.953905 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-09 00:28:42.953916 | orchestrator | Monday 09 March 2026 00:28:37 +0000 (0:00:01.157) 0:00:06.918 **********
2026-03-09 00:28:42.953928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:42.953941 | orchestrator |
2026-03-09 00:28:42.953952 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-09 00:28:42.953963 | orchestrator | Monday 09 March 2026 00:28:38 +0000 (0:00:00.296) 0:00:07.215 **********
2026-03-09 00:28:42.953974 | orchestrator | changed: [testbed-manager]
2026-03-09 00:28:42.953985 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:42.953996 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:42.954006 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:42.954074 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:42.954087 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:42.954098 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:42.954108 | orchestrator |
2026-03-09 00:28:42.954119 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-09 00:28:42.954130 | orchestrator | Monday 09 March 2026 00:28:40 +0000 (0:00:02.146) 0:00:09.362 **********
2026-03-09 00:28:42.954141 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:42.954153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:42.954166 | orchestrator |
2026-03-09 00:28:42.954186 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-09 00:28:42.954198 | orchestrator | Monday 09 March 2026 00:28:40 +0000 (0:00:00.312) 0:00:09.674 **********
2026-03-09 00:28:42.954209 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:42.954220 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:42.954231 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:42.954241 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:42.954252 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:42.954263 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:42.954282 | orchestrator |
2026-03-09 00:28:42.954298 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-09 00:28:42.954310 | orchestrator | Monday 09 March 2026 00:28:41 +0000 (0:00:01.069) 0:00:10.744 **********
2026-03-09 00:28:42.954321 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:42.954332 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:42.954342 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:42.954353 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:42.954364 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:42.954375 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:42.954385 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:42.954396 | orchestrator |
2026-03-09 00:28:42.954407 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-09 00:28:42.954418 | orchestrator | Monday 09 March 2026 00:28:42 +0000 (0:00:00.600) 0:00:11.345 **********
2026-03-09 00:28:42.954429 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:42.954439 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:42.954450 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:42.954461 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:42.954471 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:42.954482 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:42.954493 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:42.954504 | orchestrator |
2026-03-09 00:28:42.954515 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-09 00:28:42.954527 | orchestrator | Monday 09 March 2026 00:28:42 +0000 (0:00:00.475) 0:00:11.821 **********
2026-03-09 00:28:42.954538 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:42.954549 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:42.954568 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:55.293108 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:55.293212 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:55.293227 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:55.293239 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:55.293251 | orchestrator |
2026-03-09 00:28:55.293264 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-09 00:28:55.293277 | orchestrator | Monday 09 March 2026 00:28:43 +0000 (0:00:00.283) 0:00:12.104 **********
2026-03-09 00:28:55.293290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:55.293319 | orchestrator |
2026-03-09 00:28:55.293330 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-09 00:28:55.293342 | orchestrator | Monday 09 March 2026 00:28:43 +0000 (0:00:00.307) 0:00:12.412 **********
2026-03-09 00:28:55.293353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:55.293364 | orchestrator |
2026-03-09 00:28:55.293375 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-09 00:28:55.293386 | orchestrator | Monday 09 March 2026 00:28:43 +0000 (0:00:00.295) 0:00:12.708 **********
2026-03-09 00:28:55.293397 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.293408 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.293419 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.293429 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.293441 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.293453 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.293463 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.293474 | orchestrator |
2026-03-09 00:28:55.293485 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-09 00:28:55.293495 | orchestrator | Monday 09 March 2026 00:28:45 +0000 (0:00:01.625) 0:00:14.333 **********
2026-03-09 00:28:55.293533 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:55.293545 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:55.293556 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:55.293574 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:55.293593 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:55.293612 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:55.293631 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:55.293650 | orchestrator |
2026-03-09 00:28:55.293669 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-09 00:28:55.293691 | orchestrator | Monday 09 March 2026 00:28:45 +0000 (0:00:00.220) 0:00:14.553 **********
2026-03-09 00:28:55.293712 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.293735 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.293756 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.293775 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.293794 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.293815 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.293835 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.293856 | orchestrator |
2026-03-09 00:28:55.293897 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-09 00:28:55.293911 | orchestrator | Monday 09 March 2026 00:28:46 +0000 (0:00:00.547) 0:00:15.100 **********
2026-03-09 00:28:55.293924 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:55.293936 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:55.293948 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:55.293960 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:55.293973 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:55.293984 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:55.293996 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:55.294006 | orchestrator |
2026-03-09 00:28:55.294071 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-09 00:28:55.294085 | orchestrator | Monday 09 March 2026 00:28:46 +0000 (0:00:00.317) 0:00:15.417 **********
2026-03-09 00:28:55.294096 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:55.294107 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294118 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:55.294129 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:55.294139 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:55.294150 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:55.294171 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:55.294183 | orchestrator |
2026-03-09 00:28:55.294193 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-09 00:28:55.294204 | orchestrator | Monday 09 March 2026 00:28:46 +0000 (0:00:00.565) 0:00:15.983 **********
2026-03-09 00:28:55.294215 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294226 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:55.294236 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:55.294247 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:55.294257 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:55.294268 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:55.294278 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:55.294289 | orchestrator |
2026-03-09 00:28:55.294300 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-09 00:28:55.294310 | orchestrator | Monday 09 March 2026 00:28:48 +0000 (0:00:01.059) 0:00:17.043 **********
2026-03-09 00:28:55.294321 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.294332 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.294342 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.294353 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.294364 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294374 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.294385 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.294395 | orchestrator |
2026-03-09 00:28:55.294406 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-09 00:28:55.294428 | orchestrator | Monday 09 March 2026 00:28:49 +0000 (0:00:01.041) 0:00:18.084 **********
2026-03-09 00:28:55.294460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:55.294473 | orchestrator |
2026-03-09 00:28:55.294484 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-09 00:28:55.294495 | orchestrator | Monday 09 March 2026 00:28:49 +0000 (0:00:00.304) 0:00:18.389 **********
2026-03-09 00:28:55.294505 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:55.294516 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:55.294527 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:55.294538 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:55.294548 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:55.294559 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:55.294570 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:55.294580 | orchestrator |
2026-03-09 00:28:55.294591 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-09 00:28:55.294602 | orchestrator | Monday 09 March 2026 00:28:50 +0000 (0:00:01.347) 0:00:19.736 **********
2026-03-09 00:28:55.294613 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294623 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.294634 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.294644 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.294655 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.294666 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.294677 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.294687 | orchestrator |
2026-03-09 00:28:55.294698 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-09 00:28:55.294709 | orchestrator | Monday 09 March 2026 00:28:50 +0000 (0:00:00.245) 0:00:19.982 **********
2026-03-09 00:28:55.294720 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294730 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.294741 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.294751 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.294762 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.294772 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.294783 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.294795 | orchestrator |
2026-03-09 00:28:55.294815 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-09 00:28:55.294832 | orchestrator | Monday 09 March 2026 00:28:51 +0000 (0:00:00.251) 0:00:20.234 **********
2026-03-09 00:28:55.294853 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.294897 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.294910 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.294921 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.294931 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.294942 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.294952 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.294963 | orchestrator |
2026-03-09 00:28:55.294974 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-09 00:28:55.294985 | orchestrator | Monday 09 March 2026 00:28:51 +0000 (0:00:00.249) 0:00:20.483 **********
2026-03-09 00:28:55.294996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:28:55.295009 | orchestrator |
2026-03-09 00:28:55.295020 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-09 00:28:55.295030 | orchestrator | Monday 09 March 2026 00:28:51 +0000 (0:00:00.300) 0:00:20.783 **********
2026-03-09 00:28:55.295041 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.295052 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.295072 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.295083 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.295094 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.295104 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.295115 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.295125 | orchestrator |
2026-03-09 00:28:55.295136 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-09 00:28:55.295147 | orchestrator | Monday 09 March 2026 00:28:52 +0000 (0:00:00.492) 0:00:21.275 **********
2026-03-09 00:28:55.295158 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:55.295168 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:55.295179 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:55.295190 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:55.295200 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:28:55.295211 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:28:55.295221 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:28:55.295232 | orchestrator |
2026-03-09 00:28:55.295243 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-09 00:28:55.295254 | orchestrator | Monday 09 March 2026 00:28:52 +0000 (0:00:00.248) 0:00:21.524 **********
2026-03-09 00:28:55.295265 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.295275 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.295286 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.295297 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.295308 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:55.295318 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:55.295329 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:55.295340 | orchestrator |
2026-03-09 00:28:55.295350 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-09 00:28:55.295361 | orchestrator | Monday 09 March 2026 00:28:53 +0000 (0:00:00.969) 0:00:22.494 **********
2026-03-09 00:28:55.295372 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.295383 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.295393 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.295404 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.295414 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:55.295425 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:55.295435 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:55.295446 | orchestrator |
2026-03-09 00:28:55.295457 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-09 00:28:55.295467 | orchestrator | Monday 09 March 2026 00:28:54 +0000 (0:00:00.559) 0:00:23.053 **********
2026-03-09 00:28:55.295478 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:55.295489 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:55.295499 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:55.295518 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:55.295537 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:29:37.415811 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:29:37.415988 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:29:37.416014 | orchestrator |
2026-03-09 00:29:37.416027 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-09 00:29:37.416037 | orchestrator | Monday 09 March 2026 00:28:55 +0000 (0:00:01.248) 0:00:24.302 **********
2026-03-09 00:29:37.416046 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:37.416056 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:37.416064 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:37.416073 | orchestrator | changed: [testbed-manager]
2026-03-09 00:29:37.416082 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:29:37.416091 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:29:37.416100 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:29:37.416109 | orchestrator |
2026-03-09 00:29:37.416117 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-09 00:29:37.416126 | orchestrator | Monday 09 March 2026 00:29:11 +0000 (0:00:16.417) 0:00:40.720 **********
2026-03-09 00:29:37.416135 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:37.416168 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:37.416180 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:37.416191 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:37.416201 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:37.416212 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:37.416223 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:37.416233 | orchestrator |
2026-03-09 00:29:37.416244 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-09 00:29:37.416255 | orchestrator | Monday 09 March 2026 00:29:11 +0000 (0:00:00.251) 0:00:40.972 **********
2026-03-09 00:29:37.416266 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:37.416277 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:37.416288 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:37.416298 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:37.416309 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:37.416320 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:37.416330 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:37.416341 | orchestrator |
2026-03-09 00:29:37.416352 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-09 00:29:37.416365 | orchestrator | Monday 09 March 2026 00:29:12 +0000 (0:00:00.252) 0:00:41.224 **********
2026-03-09 00:29:37.416379 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:37.416390 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:37.416402 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:37.416415 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:37.416426 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:37.416439 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:37.416451 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:37.416463 | orchestrator |
2026-03-09 00:29:37.416476 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-09 00:29:37.416488 | orchestrator | Monday 09 March 2026 00:29:12 +0000 (0:00:00.242) 0:00:41.467 **********
2026-03-09
00:29:37.416503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:37.416517 | orchestrator | 2026-03-09 00:29:37.416530 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-09 00:29:37.416542 | orchestrator | Monday 09 March 2026 00:29:12 +0000 (0:00:00.320) 0:00:41.787 ********** 2026-03-09 00:29:37.416555 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.416567 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.416580 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.416590 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.416601 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.416611 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.416622 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.416632 | orchestrator | 2026-03-09 00:29:37.416643 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-09 00:29:37.416654 | orchestrator | Monday 09 March 2026 00:29:14 +0000 (0:00:01.791) 0:00:43.579 ********** 2026-03-09 00:29:37.416665 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:37.416676 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:37.416687 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:37.416697 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:37.416708 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:37.416718 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:37.416729 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:37.416739 | orchestrator | 2026-03-09 00:29:37.416750 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-09 00:29:37.416776 | 
orchestrator | Monday 09 March 2026 00:29:15 +0000 (0:00:01.114) 0:00:44.694 ********** 2026-03-09 00:29:37.416787 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.416798 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.416809 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.416827 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.416838 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.416848 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.416859 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.416869 | orchestrator | 2026-03-09 00:29:37.416880 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-09 00:29:37.416891 | orchestrator | Monday 09 March 2026 00:29:16 +0000 (0:00:00.841) 0:00:45.535 ********** 2026-03-09 00:29:37.416932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:37.416947 | orchestrator | 2026-03-09 00:29:37.416957 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-09 00:29:37.416969 | orchestrator | Monday 09 March 2026 00:29:16 +0000 (0:00:00.318) 0:00:45.854 ********** 2026-03-09 00:29:37.416980 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:37.416990 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:37.417001 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:37.417012 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:37.417023 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:37.417037 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:37.417056 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:37.417072 | orchestrator | 2026-03-09 00:29:37.417114 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-09 00:29:37.417134 | orchestrator | Monday 09 March 2026 00:29:17 +0000 (0:00:01.051) 0:00:46.905 ********** 2026-03-09 00:29:37.417152 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:29:37.417171 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:29:37.417190 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:29:37.417208 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:29:37.417227 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:29:37.417245 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:29:37.417264 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:29:37.417282 | orchestrator | 2026-03-09 00:29:37.417299 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-09 00:29:37.417318 | orchestrator | Monday 09 March 2026 00:29:18 +0000 (0:00:00.236) 0:00:47.142 ********** 2026-03-09 00:29:37.417337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:37.417391 | orchestrator | 2026-03-09 00:29:37.417411 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-09 00:29:37.417429 | orchestrator | Monday 09 March 2026 00:29:18 +0000 (0:00:00.326) 0:00:47.468 ********** 2026-03-09 00:29:37.417448 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.417467 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.417484 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.417502 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.417520 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.417538 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.417556 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.417573 | 
orchestrator | 2026-03-09 00:29:37.417590 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-09 00:29:37.417609 | orchestrator | Monday 09 March 2026 00:29:20 +0000 (0:00:01.806) 0:00:49.275 ********** 2026-03-09 00:29:37.417627 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:37.417646 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:37.417664 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:37.417682 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:37.417701 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:37.417713 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:37.417723 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:37.417747 | orchestrator | 2026-03-09 00:29:37.417758 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-09 00:29:37.417769 | orchestrator | Monday 09 March 2026 00:29:21 +0000 (0:00:01.146) 0:00:50.421 ********** 2026-03-09 00:29:37.417780 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:37.417791 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:37.417801 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:37.417812 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:37.417823 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:37.417834 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:37.417844 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:37.417855 | orchestrator | 2026-03-09 00:29:37.417865 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-09 00:29:37.417876 | orchestrator | Monday 09 March 2026 00:29:34 +0000 (0:00:13.222) 0:01:03.643 ********** 2026-03-09 00:29:37.417887 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.417898 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.417972 | orchestrator | ok: 
[testbed-node-1] 2026-03-09 00:29:37.417983 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.417994 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.418004 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.418078 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.418092 | orchestrator | 2026-03-09 00:29:37.418103 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-09 00:29:37.418114 | orchestrator | Monday 09 March 2026 00:29:35 +0000 (0:00:01.053) 0:01:04.697 ********** 2026-03-09 00:29:37.418125 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.418136 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.418147 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.418157 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.418168 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.418178 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.418189 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.418199 | orchestrator | 2026-03-09 00:29:37.418210 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-09 00:29:37.418221 | orchestrator | Monday 09 March 2026 00:29:36 +0000 (0:00:00.915) 0:01:05.613 ********** 2026-03-09 00:29:37.418242 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.418253 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.418264 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.418274 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.418285 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.418296 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.418306 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.418317 | orchestrator | 2026-03-09 00:29:37.418328 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-09 00:29:37.418339 | orchestrator | Monday 
09 March 2026 00:29:36 +0000 (0:00:00.238) 0:01:05.851 ********** 2026-03-09 00:29:37.418350 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:37.418360 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:37.418371 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:37.418381 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:37.418392 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:37.418403 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:37.418413 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:37.418424 | orchestrator | 2026-03-09 00:29:37.418435 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-09 00:29:37.418446 | orchestrator | Monday 09 March 2026 00:29:37 +0000 (0:00:00.253) 0:01:06.105 ********** 2026-03-09 00:29:37.418457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:37.418470 | orchestrator | 2026-03-09 00:29:37.418496 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-09 00:32:02.978213 | orchestrator | Monday 09 March 2026 00:29:37 +0000 (0:00:00.322) 0:01:06.427 ********** 2026-03-09 00:32:02.978321 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:02.978338 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.978356 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.978372 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.978384 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.978395 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.978405 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.978416 | orchestrator | 2026-03-09 00:32:02.978429 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-03-09 00:32:02.978440 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:01.626) 0:01:08.054 ********** 2026-03-09 00:32:02.978452 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:02.978463 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:02.978475 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:02.978485 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:02.978496 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:02.978507 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:02.978518 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:02.978529 | orchestrator | 2026-03-09 00:32:02.978540 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-09 00:32:02.978552 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:00.570) 0:01:08.625 ********** 2026-03-09 00:32:02.978599 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:02.978611 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.978622 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.978633 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.978644 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.978655 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.978665 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.978676 | orchestrator | 2026-03-09 00:32:02.978688 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-09 00:32:02.978700 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:00.241) 0:01:08.867 ********** 2026-03-09 00:32:02.978711 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:02.978722 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.978733 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.978744 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.978755 | orchestrator | ok: [testbed-node-5] 
2026-03-09 00:32:02.978766 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.978777 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.978788 | orchestrator | 2026-03-09 00:32:02.978799 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-09 00:32:02.978810 | orchestrator | Monday 09 March 2026 00:29:41 +0000 (0:00:01.191) 0:01:10.058 ********** 2026-03-09 00:32:02.978821 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:02.978832 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:02.978843 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:02.978854 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:02.978865 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:02.978876 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:02.978887 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:02.978898 | orchestrator | 2026-03-09 00:32:02.978914 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-09 00:32:02.978925 | orchestrator | Monday 09 March 2026 00:29:42 +0000 (0:00:01.610) 0:01:11.668 ********** 2026-03-09 00:32:02.978936 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:02.978947 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.978958 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.978969 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.978979 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.978990 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.979001 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.979012 | orchestrator | 2026-03-09 00:32:02.979023 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-09 00:32:02.979060 | orchestrator | Monday 09 March 2026 00:29:45 +0000 (0:00:02.375) 0:01:14.044 ********** 2026-03-09 00:32:02.979072 | orchestrator | ok: 
[testbed-manager] 2026-03-09 00:32:02.979083 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.979094 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.979105 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.979115 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.979126 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.979136 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.979147 | orchestrator | 2026-03-09 00:32:02.979158 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-09 00:32:02.979169 | orchestrator | Monday 09 March 2026 00:30:17 +0000 (0:00:32.700) 0:01:46.744 ********** 2026-03-09 00:32:02.979180 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:02.979190 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:02.979201 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:02.979212 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:02.979223 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:02.979234 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:02.979244 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:02.979255 | orchestrator | 2026-03-09 00:32:02.979266 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-09 00:32:02.979284 | orchestrator | Monday 09 March 2026 00:31:44 +0000 (0:01:26.720) 0:03:13.465 ********** 2026-03-09 00:32:02.979303 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.979321 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.979338 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.979356 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.979373 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.979391 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.979410 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:02.979428 | orchestrator | 2026-03-09 00:32:02.979446 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-09 00:32:02.979466 | orchestrator | Monday 09 March 2026 00:31:47 +0000 (0:00:02.835) 0:03:16.300 ********** 2026-03-09 00:32:02.979485 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:02.979503 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:02.979521 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:02.979532 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:02.979543 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:02.979617 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:02.979634 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:02.979645 | orchestrator | 2026-03-09 00:32:02.979656 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-09 00:32:02.979667 | orchestrator | Monday 09 March 2026 00:32:01 +0000 (0:00:14.225) 0:03:30.526 ********** 2026-03-09 00:32:02.979714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-09 00:32:02.979747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-09 00:32:02.979774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-09 00:32:02.979787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-09 00:32:02.979798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-09 00:32:02.979809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-09 00:32:02.979820 | orchestrator | 2026-03-09 00:32:02.979831 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-09 00:32:02.979842 | orchestrator | Monday 09 March 2026 00:32:02 +0000 (0:00:00.542) 0:03:31.068 ********** 2026-03-09 00:32:02.979853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-03-09 00:32:02.979864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-09 00:32:02.979875 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:02.979886 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-09 00:32:02.979897 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:02.979908 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:02.979923 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-09 00:32:02.979935 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:02.979944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:32:02.979954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:32:02.979963 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:32:02.979973 | orchestrator | 2026-03-09 00:32:02.979983 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-09 00:32:02.979992 | orchestrator | Monday 09 March 2026 00:32:02 +0000 (0:00:00.826) 0:03:31.895 ********** 2026-03-09 00:32:02.980002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-09 00:32:02.980012 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-09 00:32:02.980022 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-09 00:32:02.980032 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-09 00:32:02.980041 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-03-09 00:32:02.980057 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-09 00:32:11.069659 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-09 00:32:11.069773 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-09 00:32:11.069817 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-09 00:32:11.069830 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-09 00:32:11.069842 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-09 00:32:11.069852 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-09 00:32:11.069863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-09 00:32:11.069874 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-09 00:32:11.069885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-09 00:32:11.069895 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-09 00:32:11.069907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-09 00:32:11.069918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-09 00:32:11.069928 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-09 00:32:11.069939 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-03-09 00:32:11.069950 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-09 00:32:11.069960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-09 00:32:11.069971 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-09 00:32:11.069982 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-09 00:32:11.069993 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-09 00:32:11.070003 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-09 00:32:11.070071 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:11.070085 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-09 00:32:11.070096 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-09 00:32:11.070107 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-09 00:32:11.070118 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-09 00:32:11.070128 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-09 00:32:11.070139 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-09 00:32:11.070150 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-09 00:32:11.070160 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-09 00:32:11.070171 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-09 00:32:11.070195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-09 00:32:11.070206 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-09 00:32:11.070217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-09 00:32:11.070227 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:11.070238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-09 00:32:11.070258 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-09 00:32:11.070269 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:11.070280 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:11.070291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-09 00:32:11.070302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-09 00:32:11.070313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-09 00:32:11.070323 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-09 00:32:11.070334 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-09 00:32:11.070361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-09 00:32:11.070373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-09 00:32:11.070383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 
'value': 3}) 2026-03-09 00:32:11.070394 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-09 00:32:11.070449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-09 00:32:11.070460 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-09 00:32:11.070470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-09 00:32:11.070481 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-09 00:32:11.070492 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-09 00:32:11.070513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-09 00:32:11.070524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-09 00:32:11.070535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-09 00:32:11.070546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-09 00:32:11.070583 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-09 00:32:11.070595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-09 00:32:11.070606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-09 00:32:11.070617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-09 00:32:11.070628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-09 00:32:11.070639 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-09 00:32:11.070650 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-09 00:32:11.070669 | orchestrator | 2026-03-09 00:32:11.070681 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-09 00:32:11.070692 | orchestrator | Monday 09 March 2026 00:32:08 +0000 (0:00:06.095) 0:03:37.990 ********** 2026-03-09 00:32:11.070703 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070724 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 00:32:11.070773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-09 
00:32:11.070784 | orchestrator | 2026-03-09 00:32:11.070795 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-09 00:32:11.070805 | orchestrator | Monday 09 March 2026 00:32:10 +0000 (0:00:01.567) 0:03:39.557 ********** 2026-03-09 00:32:11.070816 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:11.070827 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:11.070838 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:11.070849 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:11.070860 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:11.070871 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:11.070882 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:11.070893 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:11.070904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:11.070915 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:11.070932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:25.062226 | orchestrator | 2026-03-09 00:32:25.062343 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-09 00:32:25.062361 | orchestrator | Monday 09 March 2026 00:32:11 +0000 (0:00:00.522) 0:03:40.080 ********** 2026-03-09 00:32:25.062373 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 
00:32:25.062386 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:25.062397 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:25.062408 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:25.062420 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:25.062431 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-09 00:32:25.062442 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:25.062452 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:25.062463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:25.062474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:25.062485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-09 00:32:25.062496 | orchestrator | 2026-03-09 00:32:25.062507 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-09 00:32:25.062547 | orchestrator | Monday 09 March 2026 00:32:11 +0000 (0:00:00.658) 0:03:40.739 ********** 2026-03-09 00:32:25.062600 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-09 00:32:25.062620 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:25.062643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-09 00:32:25.062670 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:25.062688 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-09 00:32:25.062706 
| orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:25.062725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-09 00:32:25.062743 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:25.062762 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-09 00:32:25.062780 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-09 00:32:25.062799 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-09 00:32:25.062818 | orchestrator | 2026-03-09 00:32:25.062839 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-09 00:32:25.062859 | orchestrator | Monday 09 March 2026 00:32:12 +0000 (0:00:00.683) 0:03:41.423 ********** 2026-03-09 00:32:25.062879 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:25.062895 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:25.062909 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:25.062921 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:25.062934 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:25.062947 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:25.062960 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:25.062973 | orchestrator | 2026-03-09 00:32:25.062987 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-09 00:32:25.063000 | orchestrator | Monday 09 March 2026 00:32:12 +0000 (0:00:00.390) 0:03:41.813 ********** 2026-03-09 00:32:25.063013 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:25.063027 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:25.063040 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:25.063052 | orchestrator | ok: [testbed-node-3] 
2026-03-09 00:32:25.063062 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:25.063073 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:25.063084 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:25.063094 | orchestrator | 2026-03-09 00:32:25.063106 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-09 00:32:25.063123 | orchestrator | Monday 09 March 2026 00:32:18 +0000 (0:00:05.962) 0:03:47.775 ********** 2026-03-09 00:32:25.063141 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-09 00:32:25.063157 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-09 00:32:25.063174 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:25.063190 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-09 00:32:25.063201 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:25.063212 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-09 00:32:25.063223 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:25.063234 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:25.063246 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-09 00:32:25.063257 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:25.063286 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-09 00:32:25.063297 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:25.063308 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-09 00:32:25.063319 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:25.063330 | orchestrator | 2026-03-09 00:32:25.063382 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-09 00:32:25.063394 | orchestrator | Monday 09 March 2026 00:32:19 +0000 (0:00:00.367) 0:03:48.143 ********** 2026-03-09 00:32:25.063405 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-09 00:32:25.063416 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2026-03-09 00:32:25.063427 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-09 00:32:25.063460 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-09 00:32:25.063472 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-09 00:32:25.063483 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-09 00:32:25.063494 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-09 00:32:25.063505 | orchestrator | 2026-03-09 00:32:25.063516 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-09 00:32:25.063527 | orchestrator | Monday 09 March 2026 00:32:20 +0000 (0:00:01.122) 0:03:49.265 ********** 2026-03-09 00:32:25.063540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:32:25.063554 | orchestrator | 2026-03-09 00:32:25.063598 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-09 00:32:25.063611 | orchestrator | Monday 09 March 2026 00:32:20 +0000 (0:00:00.602) 0:03:49.868 ********** 2026-03-09 00:32:25.063621 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:25.063632 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:25.063643 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:25.063654 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:25.063664 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:25.063675 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:25.063686 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:25.063696 | orchestrator | 2026-03-09 00:32:25.063707 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-09 00:32:25.063718 | orchestrator | Monday 09 March 2026 00:32:22 +0000 
(0:00:01.203) 0:03:51.071 ********** 2026-03-09 00:32:25.063729 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:25.063739 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:25.063750 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:25.063761 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:25.063771 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:25.063782 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:25.063792 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:25.063803 | orchestrator | 2026-03-09 00:32:25.063814 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-09 00:32:25.063825 | orchestrator | Monday 09 March 2026 00:32:22 +0000 (0:00:00.731) 0:03:51.803 ********** 2026-03-09 00:32:25.063835 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:25.063846 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:25.063857 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:25.063868 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:25.063878 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:25.063889 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:25.063900 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:25.063910 | orchestrator | 2026-03-09 00:32:25.063921 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-09 00:32:25.063932 | orchestrator | Monday 09 March 2026 00:32:23 +0000 (0:00:00.613) 0:03:52.417 ********** 2026-03-09 00:32:25.063943 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:25.063954 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:25.063964 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:25.063975 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:25.063986 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:25.063997 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:25.064007 | orchestrator | ok: 
[testbed-node-2] 2026-03-09 00:32:25.064018 | orchestrator | 2026-03-09 00:32:25.064029 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-09 00:32:25.064048 | orchestrator | Monday 09 March 2026 00:32:24 +0000 (0:00:00.619) 0:03:53.036 ********** 2026-03-09 00:32:25.064069 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014712.7804465, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:25.064083 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014734.3722863, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:25.064096 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014751.9196353, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:25.064131 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014740.8850772, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015754 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014746.6968699, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015847 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014738.9188678, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015859 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014737.611232, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015891 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015912 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015921 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015930 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015960 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015969 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-03-09 00:32:30.015977 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:32:30.015992 | orchestrator | 2026-03-09 00:32:30.016003 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-09 00:32:30.016012 | orchestrator | Monday 09 March 2026 00:32:25 +0000 (0:00:01.032) 0:03:54.069 ********** 2026-03-09 00:32:30.016021 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:30.016030 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:30.016038 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:30.016046 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:30.016054 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:30.016062 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:30.016070 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:30.016078 | orchestrator | 2026-03-09 00:32:30.016086 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-09 00:32:30.016094 | orchestrator | Monday 09 March 2026 00:32:26 +0000 (0:00:01.169) 0:03:55.238 ********** 2026-03-09 00:32:30.016102 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:30.016110 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:30.016119 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:30.016132 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:30.016145 | 
orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:30.016158 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:30.016170 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:30.016184 | orchestrator | 2026-03-09 00:32:30.016202 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-09 00:32:30.016216 | orchestrator | Monday 09 March 2026 00:32:27 +0000 (0:00:01.202) 0:03:56.441 ********** 2026-03-09 00:32:30.016229 | orchestrator | changed: [testbed-manager] 2026-03-09 00:32:30.016244 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:32:30.016252 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:32:30.016260 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:32:30.016268 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:32:30.016275 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:32:30.016283 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:32:30.016291 | orchestrator | 2026-03-09 00:32:30.016301 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-09 00:32:30.016310 | orchestrator | Monday 09 March 2026 00:32:28 +0000 (0:00:01.143) 0:03:57.584 ********** 2026-03-09 00:32:30.016319 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:30.016328 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:30.016338 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:30.016348 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:30.016357 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:30.016365 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:30.016374 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:30.016384 | orchestrator | 2026-03-09 00:32:30.016393 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-09 00:32:30.016403 | orchestrator | Monday 09 March 2026 00:32:28 +0000 
(0:00:00.288) 0:03:57.872 ********** 2026-03-09 00:32:30.016423 | orchestrator | ok: [testbed-manager] 2026-03-09 00:32:30.016434 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:32:30.016458 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:32:30.016472 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:32:30.016496 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:32:30.016510 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:32:30.016525 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:32:30.016540 | orchestrator | 2026-03-09 00:32:30.016555 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-09 00:32:30.016608 | orchestrator | Monday 09 March 2026 00:32:29 +0000 (0:00:00.749) 0:03:58.622 ********** 2026-03-09 00:32:30.016620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:32:30.016640 | orchestrator | 2026-03-09 00:32:30.016650 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-09 00:32:30.016666 | orchestrator | Monday 09 March 2026 00:32:30 +0000 (0:00:00.408) 0:03:59.030 ********** 2026-03-09 00:33:45.926610 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:45.926718 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:33:45.926736 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:33:45.926748 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:33:45.926759 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:33:45.926770 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:33:45.926781 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:33:45.926793 | orchestrator | 2026-03-09 00:33:45.926806 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-03-09 00:33:45.926818 | orchestrator | Monday 09 March 2026 00:32:37 +0000 (0:00:07.912) 0:04:06.943 ********** 2026-03-09 00:33:45.926829 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:45.926840 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:45.926851 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:45.926862 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:45.926873 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:45.926884 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:45.926894 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:45.926905 | orchestrator | 2026-03-09 00:33:45.926916 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-09 00:33:45.926927 | orchestrator | Monday 09 March 2026 00:32:39 +0000 (0:00:01.290) 0:04:08.234 ********** 2026-03-09 00:33:45.926938 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:45.926949 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:45.926960 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:45.926971 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:45.926981 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:45.926992 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:33:45.927003 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:33:45.927014 | orchestrator | 2026-03-09 00:33:45.927025 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-09 00:33:45.927036 | orchestrator | Monday 09 March 2026 00:32:40 +0000 (0:00:01.136) 0:04:09.370 ********** 2026-03-09 00:33:45.927047 | orchestrator | ok: [testbed-manager] 2026-03-09 00:33:45.927057 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:33:45.927068 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:33:45.927079 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:33:45.927090 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:33:45.927101 | orchestrator | ok: [testbed-node-1] 
2026-03-09 00:33:45.927115 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:45.927127 | orchestrator |
2026-03-09 00:33:45.927140 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-09 00:33:45.927153 | orchestrator | Monday 09 March 2026 00:32:40 +0000 (0:00:00.313) 0:04:09.684 **********
2026-03-09 00:33:45.927165 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:45.927178 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:45.927191 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:45.927203 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:45.927216 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:45.927228 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:45.927240 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:45.927253 | orchestrator |
2026-03-09 00:33:45.927265 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-09 00:33:45.927279 | orchestrator | Monday 09 March 2026 00:32:40 +0000 (0:00:00.330) 0:04:10.015 **********
2026-03-09 00:33:45.927292 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:45.927304 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:45.927317 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:45.927354 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:45.927367 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:45.927380 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:45.927392 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:45.927405 | orchestrator |
2026-03-09 00:33:45.927424 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-09 00:33:45.927442 | orchestrator | Monday 09 March 2026 00:32:41 +0000 (0:00:00.313) 0:04:10.328 **********
2026-03-09 00:33:45.927462 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:45.927479 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:45.927496 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:45.927514 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:45.927532 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:45.927593 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:45.927615 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:45.927635 | orchestrator |
2026-03-09 00:33:45.927653 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-09 00:33:45.927667 | orchestrator | Monday 09 March 2026 00:32:46 +0000 (0:00:05.576) 0:04:15.905 **********
2026-03-09 00:33:45.927681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:45.927695 | orchestrator |
2026-03-09 00:33:45.927706 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-09 00:33:45.927717 | orchestrator | Monday 09 March 2026 00:32:47 +0000 (0:00:00.430) 0:04:16.336 **********
2026-03-09 00:33:45.927727 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927738 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-09 00:33:45.927749 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927760 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:45.927771 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-09 00:33:45.927799 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927811 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-09 00:33:45.927822 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:45.927832 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:45.927843 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927854 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-09 00:33:45.927864 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927875 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-09 00:33:45.927886 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:45.927897 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927908 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-09 00:33:45.927939 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:45.927951 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:45.927962 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-09 00:33:45.927972 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-09 00:33:45.927983 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:45.927994 | orchestrator |
2026-03-09 00:33:45.928005 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-09 00:33:45.928016 | orchestrator | Monday 09 March 2026 00:32:47 +0000 (0:00:00.328) 0:04:16.664 **********
2026-03-09 00:33:45.928027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:45.928038 | orchestrator |
2026-03-09 00:33:45.928049 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-09 00:33:45.928071 | orchestrator | Monday 09 March 2026 00:32:48 +0000 (0:00:00.376) 0:04:17.040 **********
2026-03-09 00:33:45.928082 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-09 00:33:45.928093 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-09 00:33:45.928104 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:45.928115 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-09 00:33:45.928125 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:45.928136 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:45.928147 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-09 00:33:45.928158 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-09 00:33:45.928168 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:45.928179 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-09 00:33:45.928190 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:45.928200 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:45.928211 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-09 00:33:45.928221 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:45.928232 | orchestrator |
2026-03-09 00:33:45.928243 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-09 00:33:45.928254 | orchestrator | Monday 09 March 2026 00:32:48 +0000 (0:00:00.371) 0:04:17.412 **********
2026-03-09 00:33:45.928265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:45.928276 | orchestrator |
2026-03-09 00:33:45.928287 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-09 00:33:45.928299 | orchestrator | Monday 09 March 2026 00:32:48 +0000 (0:00:00.425) 0:04:17.838 **********
2026-03-09 00:33:45.928318 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:45.928336 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:45.928354 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:45.928371 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:45.928395 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:45.928412 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:45.928429 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:45.928446 | orchestrator |
2026-03-09 00:33:45.928464 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-09 00:33:45.928482 | orchestrator | Monday 09 March 2026 00:33:22 +0000 (0:00:34.144) 0:04:51.982 **********
2026-03-09 00:33:45.928500 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:45.928517 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:45.928535 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:45.928578 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:45.928601 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:45.928618 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:45.928635 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:45.928646 | orchestrator |
2026-03-09 00:33:45.928657 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-09 00:33:45.928668 | orchestrator | Monday 09 March 2026 00:33:30 +0000 (0:00:07.814) 0:04:59.797 **********
2026-03-09 00:33:45.928679 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:45.928689 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:45.928699 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:45.928710 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:45.928720 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:45.928731 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:45.928741 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:45.928752 | orchestrator |
2026-03-09 00:33:45.928762 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-09 00:33:45.928793 | orchestrator | Monday 09 March 2026 00:33:38 +0000 (0:00:07.764) 0:05:07.561 **********
2026-03-09 00:33:45.928804 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:45.928815 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:45.928826 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:45.928836 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:45.928847 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:45.928857 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:45.928868 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:45.928878 | orchestrator |
2026-03-09 00:33:45.928889 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-09 00:33:45.928899 | orchestrator | Monday 09 March 2026 00:33:40 +0000 (0:00:01.721) 0:05:09.283 **********
2026-03-09 00:33:45.928910 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:45.928921 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:45.928931 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:45.928942 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:45.928953 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:45.928963 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:45.928974 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:45.928985 | orchestrator |
2026-03-09 00:33:45.929006 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-09 00:33:57.195958 | orchestrator | Monday 09 March 2026 00:33:45 +0000 (0:00:05.650) 0:05:14.933 **********
2026-03-09 00:33:57.196067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:57.196085 | orchestrator |
2026-03-09 00:33:57.196097 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-09 00:33:57.196108 | orchestrator | Monday 09 March 2026 00:33:46 +0000 (0:00:00.561) 0:05:15.495 **********
2026-03-09 00:33:57.196119 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:57.196131 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:57.196142 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:57.196152 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:57.196162 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:57.196173 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:57.196184 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:57.196194 | orchestrator |
2026-03-09 00:33:57.196205 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-09 00:33:57.196216 | orchestrator | Monday 09 March 2026 00:33:47 +0000 (0:00:00.725) 0:05:16.221 **********
2026-03-09 00:33:57.196227 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:57.196238 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:57.196248 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:57.196259 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:57.196269 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:57.196279 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:57.196289 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:57.196299 | orchestrator |
2026-03-09 00:33:57.196310 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-09 00:33:57.196320 | orchestrator | Monday 09 March 2026 00:33:48 +0000 (0:00:01.633) 0:05:17.854 **********
2026-03-09 00:33:57.196331 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:57.196342 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:57.196352 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:57.196363 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:57.196372 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:57.196383 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:57.196394 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:57.196405 | orchestrator |
2026-03-09 00:33:57.196415 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-09 00:33:57.196426 | orchestrator | Monday 09 March 2026 00:33:49 +0000 (0:00:00.825) 0:05:18.679 **********
2026-03-09 00:33:57.196461 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.196473 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.196483 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.196493 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:57.196503 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:57.196514 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:57.196525 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:57.196535 | orchestrator |
2026-03-09 00:33:57.196567 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-09 00:33:57.196578 | orchestrator | Monday 09 March 2026 00:33:49 +0000 (0:00:00.293) 0:05:18.973 **********
2026-03-09 00:33:57.196589 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.196600 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.196610 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.196636 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:57.196647 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:57.196658 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:57.196669 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:57.196677 | orchestrator |
2026-03-09 00:33:57.196685 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-09 00:33:57.196693 | orchestrator | Monday 09 March 2026 00:33:50 +0000 (0:00:00.425) 0:05:19.398 **********
2026-03-09 00:33:57.196700 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:57.196708 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:57.196714 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:57.196719 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:57.196725 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:57.196731 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:57.196737 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:57.196743 | orchestrator |
2026-03-09 00:33:57.196749 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-09 00:33:57.196756 | orchestrator | Monday 09 March 2026 00:33:50 +0000 (0:00:00.290) 0:05:19.688 **********
2026-03-09 00:33:57.196762 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.196767 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.196773 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.196779 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:57.196785 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:57.196791 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:57.196797 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:57.196803 | orchestrator |
2026-03-09 00:33:57.196809 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-09 00:33:57.196816 | orchestrator | Monday 09 March 2026 00:33:50 +0000 (0:00:00.293) 0:05:19.982 **********
2026-03-09 00:33:57.196822 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:57.196828 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:57.196834 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:57.196840 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:57.196846 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:57.196852 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:57.196858 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:57.196864 | orchestrator |
2026-03-09 00:33:57.196870 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-09 00:33:57.196876 | orchestrator | Monday 09 March 2026 00:33:51 +0000 (0:00:00.308) 0:05:20.290 **********
2026-03-09 00:33:57.196882 | orchestrator | ok: [testbed-manager] =>
2026-03-09 00:33:57.196888 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.196894 | orchestrator | ok: [testbed-node-3] =>
2026-03-09 00:33:57.196900 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.196906 | orchestrator | ok: [testbed-node-4] =>
2026-03-09 00:33:57.196915 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.196925 | orchestrator | ok: [testbed-node-5] =>
2026-03-09 00:33:57.196935 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.196964 | orchestrator | ok: [testbed-node-0] =>
2026-03-09 00:33:57.196984 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.196996 | orchestrator | ok: [testbed-node-1] =>
2026-03-09 00:33:57.197006 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.197017 | orchestrator | ok: [testbed-node-2] =>
2026-03-09 00:33:57.197028 | orchestrator |  docker_version: 5:27.5.1
2026-03-09 00:33:57.197038 | orchestrator |
2026-03-09 00:33:57.197048 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-09 00:33:57.197057 | orchestrator | Monday 09 March 2026 00:33:51 +0000 (0:00:00.294) 0:05:20.585 **********
2026-03-09 00:33:57.197067 | orchestrator | ok: [testbed-manager] =>
2026-03-09 00:33:57.197076 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197086 | orchestrator | ok: [testbed-node-3] =>
2026-03-09 00:33:57.197096 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197106 | orchestrator | ok: [testbed-node-4] =>
2026-03-09 00:33:57.197117 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197127 | orchestrator | ok: [testbed-node-5] =>
2026-03-09 00:33:57.197137 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197148 | orchestrator | ok: [testbed-node-0] =>
2026-03-09 00:33:57.197157 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197163 | orchestrator | ok: [testbed-node-1] =>
2026-03-09 00:33:57.197169 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197176 | orchestrator | ok: [testbed-node-2] =>
2026-03-09 00:33:57.197182 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-09 00:33:57.197188 | orchestrator |
2026-03-09 00:33:57.197194 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-09 00:33:57.197200 | orchestrator | Monday 09 March 2026 00:33:51 +0000 (0:00:00.328) 0:05:20.913 **********
2026-03-09 00:33:57.197206 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.197212 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.197218 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.197224 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:57.197230 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:57.197236 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:57.197242 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:57.197248 | orchestrator |
2026-03-09 00:33:57.197254 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-09 00:33:57.197260 | orchestrator | Monday 09 March 2026 00:33:52 +0000 (0:00:00.291) 0:05:21.204 **********
2026-03-09 00:33:57.197265 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.197271 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.197278 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.197288 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:57.197298 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:57.197308 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:57.197319 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:57.197330 | orchestrator |
2026-03-09 00:33:57.197340 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-09 00:33:57.197350 | orchestrator | Monday 09 March 2026 00:33:52 +0000 (0:00:00.293) 0:05:21.498 **********
2026-03-09 00:33:57.197363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:57.197375 | orchestrator |
2026-03-09 00:33:57.197391 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-09 00:33:57.197403 | orchestrator | Monday 09 March 2026 00:33:52 +0000 (0:00:00.463) 0:05:21.961 **********
2026-03-09 00:33:57.197413 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:57.197423 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:57.197433 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:57.197443 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:57.197454 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:57.197472 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:57.197483 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:57.197493 | orchestrator |
2026-03-09 00:33:57.197504 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-09 00:33:57.197511 | orchestrator | Monday 09 March 2026 00:33:53 +0000 (0:00:00.977) 0:05:22.938 **********
2026-03-09 00:33:57.197517 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:57.197523 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:57.197529 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:57.197535 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:57.197541 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:57.197587 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:57.197599 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:57.197609 | orchestrator |
2026-03-09 00:33:57.197619 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-09 00:33:57.197631 | orchestrator | Monday 09 March 2026 00:33:56 +0000 (0:00:02.874) 0:05:25.813 **********
2026-03-09 00:33:57.197641 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-09 00:33:57.197651 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-09 00:33:57.197662 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-09 00:33:57.197673 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-09 00:33:57.197683 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-09 00:33:57.197693 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-09 00:33:57.197703 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:57.197714 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-09 00:33:57.197724 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-09 00:33:57.197734 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-09 00:33:57.197744 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:57.197754 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-09 00:33:57.197765 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-09 00:33:57.197775 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-09 00:33:57.197785 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:57.197796 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-09 00:33:57.197819 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-09 00:34:56.936462 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-09 00:34:56.936590 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.936603 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-09 00:34:56.936611 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-09 00:34:56.936619 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-09 00:34:56.936626 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.936633 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.936640 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-09 00:34:56.936647 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-09 00:34:56.936654 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-09 00:34:56.936660 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.936668 | orchestrator |
2026-03-09 00:34:56.936676 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-09 00:34:56.936684 | orchestrator | Monday 09 March 2026 00:33:57 +0000 (0:00:00.615) 0:05:26.429 **********
2026-03-09 00:34:56.936691 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.936697 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.936707 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.936718 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.936730 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.936741 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.936752 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.936789 | orchestrator |
2026-03-09 00:34:56.936800 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-09 00:34:56.936811 | orchestrator | Monday 09 March 2026 00:34:03 +0000 (0:00:06.490) 0:05:32.919 **********
2026-03-09 00:34:56.936822 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.936833 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.936844 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.936856 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.936867 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.936879 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.936890 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.936902 | orchestrator |
2026-03-09 00:34:56.936913 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-09 00:34:56.936923 | orchestrator | Monday 09 March 2026 00:34:04 +0000 (0:00:01.041) 0:05:33.961 **********
2026-03-09 00:34:56.936931 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.936937 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.936944 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.936950 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.936956 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.936963 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.936969 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.936976 | orchestrator |
2026-03-09 00:34:56.936983 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-09 00:34:56.936989 | orchestrator | Monday 09 March 2026 00:34:12 +0000 (0:00:07.781) 0:05:41.743 **********
2026-03-09 00:34:56.936996 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937002 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937009 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.937017 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937025 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937033 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937042 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937049 | orchestrator |
2026-03-09 00:34:56.937057 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-09 00:34:56.937065 | orchestrator | Monday 09 March 2026 00:34:15 +0000 (0:00:03.269) 0:05:45.012 **********
2026-03-09 00:34:56.937073 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937080 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937088 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937096 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937104 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937111 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937119 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937126 | orchestrator |
2026-03-09 00:34:56.937134 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-09 00:34:56.937141 | orchestrator | Monday 09 March 2026 00:34:17 +0000 (0:00:01.326) 0:05:46.338 **********
2026-03-09 00:34:56.937149 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937157 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937164 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937172 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937180 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937187 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937196 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937203 | orchestrator |
2026-03-09 00:34:56.937211 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-09 00:34:56.937219 | orchestrator | Monday 09 March 2026 00:34:18 +0000 (0:00:01.574) 0:05:47.913 **********
2026-03-09 00:34:56.937227 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.937234 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.937240 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.937247 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.937260 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.937266 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.937273 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.937280 | orchestrator |
2026-03-09 00:34:56.937286 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-09 00:34:56.937293 | orchestrator | Monday 09 March 2026 00:34:19 +0000 (0:00:00.630) 0:05:48.543 **********
2026-03-09 00:34:56.937300 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937307 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937313 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937320 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937326 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937333 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937339 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937349 | orchestrator |
2026-03-09 00:34:56.937360 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-09 00:34:56.937389 | orchestrator | Monday 09 March 2026 00:34:28 +0000 (0:00:09.325) 0:05:57.868 **********
2026-03-09 00:34:56.937400 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:56.937412 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937422 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937432 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937442 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937452 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937463 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937474 | orchestrator |
2026-03-09 00:34:56.937486 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-09 00:34:56.937497 | orchestrator | Monday 09 March 2026 00:34:29 +0000 (0:00:00.955) 0:05:58.824 **********
2026-03-09 00:34:56.937509 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937520 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937532 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937544 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937582 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937590 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937596 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937603 | orchestrator |
2026-03-09 00:34:56.937610 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-09 00:34:56.937616 | orchestrator | Monday 09 March 2026 00:34:39 +0000 (0:00:09.310) 0:06:08.134 **********
2026-03-09 00:34:56.937623 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937629 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.937636 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.937642 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937649 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.937655 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.937662 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.937668 | orchestrator |
2026-03-09 00:34:56.937675 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-09 00:34:56.937682 | orchestrator | Monday 09 March 2026 00:34:50 +0000 (0:00:11.070) 0:06:19.204 **********
2026-03-09 00:34:56.937689 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-09 00:34:56.937695 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-09 00:34:56.937702 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-09 00:34:56.937708 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-09 00:34:56.937715 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-09 00:34:56.937721 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-09 00:34:56.937728 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-09 00:34:56.937734 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-09 00:34:56.937741 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-09 00:34:56.937748 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-09 00:34:56.937761 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-09 00:34:56.937812 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-09 00:34:56.937819 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-09 00:34:56.937826 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-09 00:34:56.937832 | orchestrator |
2026-03-09 00:34:56.937839 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-09 00:34:56.937846 | orchestrator | Monday 09 March 2026 00:34:51 +0000 (0:00:01.217) 0:06:20.422 **********
2026-03-09 00:34:56.937868 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:56.937879 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.937891 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.937902 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.937913 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.937925 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.937935 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.937946 | orchestrator |
2026-03-09 00:34:56.937957 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-09 00:34:56.937966 | orchestrator | Monday 09 March 2026 00:34:51 +0000 (0:00:00.559) 0:06:20.981 **********
2026-03-09 00:34:56.937976 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:56.937987 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:56.937999 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:56.938009 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:56.938080 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:56.938092 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:56.938102 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:56.938114 | orchestrator |
2026-03-09 00:34:56.938126 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-09 00:34:56.938139 | orchestrator | Monday 09 March 2026 00:34:55 +0000 (0:00:03.902) 0:06:24.884 **********
2026-03-09 00:34:56.938151 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:56.938163 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.938173 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.938186 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:34:56.938198 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:34:56.938209 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:34:56.938220 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:34:56.938232 | orchestrator |
2026-03-09 00:34:56.938246 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-09 00:34:56.938258 | orchestrator | Monday 09 March 2026 00:34:56 +0000 (0:00:00.532) 0:06:25.417 **********
2026-03-09 00:34:56.938270 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-09 00:34:56.938282 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-09 00:34:56.938294 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:34:56.938306 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-09 00:34:56.938317 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-09 00:34:56.938328 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:34:56.938334 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-09 00:34:56.938341 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-09 00:34:56.938347 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:34:56.938366 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-09 00:35:15.761473 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-09 00:35:15.761637 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:35:15.761654 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-09 00:35:15.761667 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-09 00:35:15.761679 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:35:15.761713 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-09 00:35:15.761725 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-09 00:35:15.761736 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:35:15.761747 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-09 00:35:15.761757 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-09 00:35:15.761768 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:35:15.761780 | orchestrator |
2026-03-09 00:35:15.761793 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-03-09 00:35:15.761805 | orchestrator | Monday 09 March 2026 00:34:57 +0000 (0:00:00.808) 0:06:26.225 ********** 2026-03-09 00:35:15.761816 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:15.761827 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:15.761939 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:15.761953 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:15.761964 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:15.761975 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:15.761986 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:15.761996 | orchestrator | 2026-03-09 00:35:15.762007 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-09 00:35:15.762068 | orchestrator | Monday 09 March 2026 00:34:57 +0000 (0:00:00.570) 0:06:26.796 ********** 2026-03-09 00:35:15.762085 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:15.762098 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:15.762111 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:15.762124 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:15.762135 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:15.762148 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:15.762159 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:15.762206 | orchestrator | 2026-03-09 00:35:15.762220 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-09 00:35:15.762234 | orchestrator | Monday 09 March 2026 00:34:58 +0000 (0:00:00.519) 0:06:27.315 ********** 2026-03-09 00:35:15.762247 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:15.762259 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:15.762272 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:15.762284 | orchestrator | skipping: 
[testbed-node-5] 2026-03-09 00:35:15.762296 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:15.762309 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:15.762322 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:15.762334 | orchestrator | 2026-03-09 00:35:15.762347 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-09 00:35:15.762360 | orchestrator | Monday 09 March 2026 00:34:58 +0000 (0:00:00.569) 0:06:27.885 ********** 2026-03-09 00:35:15.762374 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.762385 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.762396 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.762407 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:15.762418 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.762429 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.762439 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.762450 | orchestrator | 2026-03-09 00:35:15.762461 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-09 00:35:15.762472 | orchestrator | Monday 09 March 2026 00:35:00 +0000 (0:00:01.883) 0:06:29.768 ********** 2026-03-09 00:35:15.762484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:35:15.762497 | orchestrator | 2026-03-09 00:35:15.762508 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-09 00:35:15.762520 | orchestrator | Monday 09 March 2026 00:35:01 +0000 (0:00:00.918) 0:06:30.686 ********** 2026-03-09 00:35:15.762589 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.762601 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:15.762612 | orchestrator | changed: 
[testbed-node-4] 2026-03-09 00:35:15.762623 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:15.762634 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:15.762644 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:15.762655 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:15.762666 | orchestrator | 2026-03-09 00:35:15.762677 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-09 00:35:15.762687 | orchestrator | Monday 09 March 2026 00:35:02 +0000 (0:00:00.852) 0:06:31.538 ********** 2026-03-09 00:35:15.762698 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.762709 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:15.762720 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:15.762730 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:15.762741 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:15.762752 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:15.762762 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:15.762773 | orchestrator | 2026-03-09 00:35:15.762784 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-09 00:35:15.762795 | orchestrator | Monday 09 March 2026 00:35:03 +0000 (0:00:00.836) 0:06:32.375 ********** 2026-03-09 00:35:15.762806 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.762816 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:15.762827 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:15.762838 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:15.762849 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:15.762859 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:15.762870 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:15.762880 | orchestrator | 2026-03-09 00:35:15.762891 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-09 00:35:15.762923 | orchestrator | Monday 09 March 2026 00:35:04 +0000 (0:00:01.534) 0:06:33.909 ********** 2026-03-09 00:35:15.762935 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:15.762966 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.762977 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.762989 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:15.763000 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.763010 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.763021 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.763032 | orchestrator | 2026-03-09 00:35:15.763043 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-09 00:35:15.763054 | orchestrator | Monday 09 March 2026 00:35:06 +0000 (0:00:01.343) 0:06:35.253 ********** 2026-03-09 00:35:15.763065 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.763075 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:15.763086 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:15.763097 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:15.763107 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:15.763118 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:15.763129 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:15.763140 | orchestrator | 2026-03-09 00:35:15.763151 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-09 00:35:15.763162 | orchestrator | Monday 09 March 2026 00:35:07 +0000 (0:00:01.198) 0:06:36.452 ********** 2026-03-09 00:35:15.763172 | orchestrator | changed: [testbed-manager] 2026-03-09 00:35:15.763183 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:15.763205 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:15.763216 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:15.763227 | orchestrator | changed: 
[testbed-node-0] 2026-03-09 00:35:15.763238 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:15.763248 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:15.763259 | orchestrator | 2026-03-09 00:35:15.763278 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-09 00:35:15.763289 | orchestrator | Monday 09 March 2026 00:35:08 +0000 (0:00:01.289) 0:06:37.741 ********** 2026-03-09 00:35:15.763301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:35:15.763312 | orchestrator | 2026-03-09 00:35:15.763323 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-09 00:35:15.763334 | orchestrator | Monday 09 March 2026 00:35:09 +0000 (0:00:00.903) 0:06:38.645 ********** 2026-03-09 00:35:15.763348 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.763366 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.763384 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.763402 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:15.763418 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.763436 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.763453 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.763471 | orchestrator | 2026-03-09 00:35:15.763488 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-09 00:35:15.763506 | orchestrator | Monday 09 March 2026 00:35:10 +0000 (0:00:01.272) 0:06:39.918 ********** 2026-03-09 00:35:15.763524 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.763541 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.763585 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.763604 | orchestrator | ok: [testbed-node-5] 
2026-03-09 00:35:15.763623 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.763660 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.763673 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.763684 | orchestrator | 2026-03-09 00:35:15.763694 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-09 00:35:15.763705 | orchestrator | Monday 09 March 2026 00:35:11 +0000 (0:00:01.070) 0:06:40.988 ********** 2026-03-09 00:35:15.763716 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.763727 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.763737 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.763748 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:15.763758 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.763769 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.763779 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.763790 | orchestrator | 2026-03-09 00:35:15.763800 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-09 00:35:15.763811 | orchestrator | Monday 09 March 2026 00:35:13 +0000 (0:00:01.081) 0:06:42.070 ********** 2026-03-09 00:35:15.763822 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:15.763832 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:15.763843 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:15.763853 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:15.763864 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:15.763874 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:15.763885 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:15.763895 | orchestrator | 2026-03-09 00:35:15.763906 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-09 00:35:15.763917 | orchestrator | Monday 09 March 2026 00:35:14 +0000 (0:00:01.436) 0:06:43.506 ********** 2026-03-09 00:35:15.763928 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:35:15.763939 | orchestrator | 2026-03-09 00:35:15.763949 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:15.763960 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.971) 0:06:44.477 ********** 2026-03-09 00:35:15.763971 | orchestrator | 2026-03-09 00:35:15.763981 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:15.764001 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.039) 0:06:44.517 ********** 2026-03-09 00:35:15.764012 | orchestrator | 2026-03-09 00:35:15.764023 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:15.764033 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.039) 0:06:44.557 ********** 2026-03-09 00:35:15.764044 | orchestrator | 2026-03-09 00:35:15.764055 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:15.764077 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.047) 0:06:44.604 ********** 2026-03-09 00:35:42.313481 | orchestrator | 2026-03-09 00:35:42.313660 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:42.313679 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.039) 0:06:44.643 ********** 2026-03-09 00:35:42.313691 | orchestrator | 2026-03-09 00:35:42.313702 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:42.313713 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.038) 0:06:44.682 ********** 2026-03-09 00:35:42.313724 | orchestrator | 
2026-03-09 00:35:42.313736 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-09 00:35:42.313747 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.046) 0:06:44.729 ********** 2026-03-09 00:35:42.313757 | orchestrator | 2026-03-09 00:35:42.313768 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-09 00:35:42.313779 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:00.040) 0:06:44.769 ********** 2026-03-09 00:35:42.313790 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:42.313802 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:42.313813 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:42.313824 | orchestrator | 2026-03-09 00:35:42.313835 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-09 00:35:42.313846 | orchestrator | Monday 09 March 2026 00:35:16 +0000 (0:00:00.973) 0:06:45.742 ********** 2026-03-09 00:35:42.313857 | orchestrator | changed: [testbed-manager] 2026-03-09 00:35:42.313869 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:42.313880 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:42.313891 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:42.313901 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:42.313912 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:42.313922 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:42.313933 | orchestrator | 2026-03-09 00:35:42.313944 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-09 00:35:42.313955 | orchestrator | Monday 09 March 2026 00:35:18 +0000 (0:00:01.418) 0:06:47.160 ********** 2026-03-09 00:35:42.313966 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:42.313977 | orchestrator | changed: [testbed-manager] 2026-03-09 00:35:42.313988 | orchestrator | changed: [testbed-node-4] 
2026-03-09 00:35:42.313999 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:42.314010 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:42.314076 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:42.314089 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:42.314101 | orchestrator | 2026-03-09 00:35:42.314113 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-09 00:35:42.314125 | orchestrator | Monday 09 March 2026 00:35:19 +0000 (0:00:01.242) 0:06:48.403 ********** 2026-03-09 00:35:42.314138 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:42.314150 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:42.314163 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:42.314175 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:42.314188 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:42.314201 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:42.314214 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:42.314227 | orchestrator | 2026-03-09 00:35:42.314240 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-09 00:35:42.314251 | orchestrator | Monday 09 March 2026 00:35:21 +0000 (0:00:02.460) 0:06:50.863 ********** 2026-03-09 00:35:42.314289 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:42.314301 | orchestrator | 2026-03-09 00:35:42.314326 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-09 00:35:42.314338 | orchestrator | Monday 09 March 2026 00:35:21 +0000 (0:00:00.114) 0:06:50.978 ********** 2026-03-09 00:35:42.314349 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:42.314360 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:42.314371 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:42.314381 | orchestrator | changed: [testbed-node-5] 2026-03-09 
00:35:42.314392 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:35:42.314403 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:42.314413 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:42.314424 | orchestrator | 2026-03-09 00:35:42.314434 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-09 00:35:42.314446 | orchestrator | Monday 09 March 2026 00:35:23 +0000 (0:00:01.079) 0:06:52.057 ********** 2026-03-09 00:35:42.314457 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:42.314468 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:42.314478 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:42.314489 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:42.314499 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:42.314510 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:42.314521 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:42.314531 | orchestrator | 2026-03-09 00:35:42.314542 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-09 00:35:42.314585 | orchestrator | Monday 09 March 2026 00:35:23 +0000 (0:00:00.544) 0:06:52.601 ********** 2026-03-09 00:35:42.314597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:35:42.314611 | orchestrator | 2026-03-09 00:35:42.314622 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-09 00:35:42.314633 | orchestrator | Monday 09 March 2026 00:35:24 +0000 (0:00:01.088) 0:06:53.690 ********** 2026-03-09 00:35:42.314644 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:42.314654 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:42.314665 | orchestrator 
| ok: [testbed-node-4] 2026-03-09 00:35:42.314675 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:42.314686 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:42.314696 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:42.314708 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:42.314718 | orchestrator | 2026-03-09 00:35:42.314729 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-09 00:35:42.314740 | orchestrator | Monday 09 March 2026 00:35:25 +0000 (0:00:00.925) 0:06:54.616 ********** 2026-03-09 00:35:42.314750 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-09 00:35:42.314780 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-09 00:35:42.314792 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-09 00:35:42.314803 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-09 00:35:42.314814 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-09 00:35:42.314824 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-09 00:35:42.314835 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-09 00:35:42.314845 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-09 00:35:42.314856 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-09 00:35:42.314867 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-09 00:35:42.314877 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-09 00:35:42.314888 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-09 00:35:42.314907 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-09 00:35:42.314918 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-09 00:35:42.314929 | orchestrator | 2026-03-09 00:35:42.314939 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-09 00:35:42.314950 | orchestrator | Monday 09 March 2026 00:35:28 +0000 (0:00:02.500) 0:06:57.116 ********** 2026-03-09 00:35:42.314961 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:42.314972 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:42.314982 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:42.314993 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:42.315004 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:42.315014 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:42.315025 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:42.315035 | orchestrator | 2026-03-09 00:35:42.315046 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-09 00:35:42.315057 | orchestrator | Monday 09 March 2026 00:35:28 +0000 (0:00:00.823) 0:06:57.939 ********** 2026-03-09 00:35:42.315070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:35:42.315083 | orchestrator | 2026-03-09 00:35:42.315093 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-09 00:35:42.315104 | orchestrator | Monday 09 March 2026 00:35:29 +0000 (0:00:00.851) 0:06:58.791 ********** 2026-03-09 00:35:42.315115 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:42.315126 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:42.315136 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:42.315147 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:42.315158 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:42.315168 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:42.315179 | orchestrator | ok: 
[testbed-node-2] 2026-03-09 00:35:42.315190 | orchestrator | 2026-03-09 00:35:42.315200 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-09 00:35:42.315211 | orchestrator | Monday 09 March 2026 00:35:30 +0000 (0:00:00.863) 0:06:59.654 ********** 2026-03-09 00:35:42.315228 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:42.315239 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:42.315250 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:42.315260 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:42.315271 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:42.315282 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:42.315293 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:42.315303 | orchestrator | 2026-03-09 00:35:42.315314 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-09 00:35:42.315325 | orchestrator | Monday 09 March 2026 00:35:31 +0000 (0:00:01.078) 0:07:00.733 ********** 2026-03-09 00:35:42.315335 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:42.315346 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:42.315357 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:42.315367 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:42.315378 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:42.315389 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:42.315399 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:42.315410 | orchestrator | 2026-03-09 00:35:42.315420 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-09 00:35:42.315431 | orchestrator | Monday 09 March 2026 00:35:32 +0000 (0:00:00.488) 0:07:01.221 ********** 2026-03-09 00:35:42.315442 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:42.315452 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:42.315463 | 
orchestrator | ok: [testbed-node-4]
2026-03-09 00:35:42.315474 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:35:42.315484 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:35:42.315501 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:35:42.315512 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:35:42.315523 | orchestrator |
2026-03-09 00:35:42.315533 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-09 00:35:42.315544 | orchestrator | Monday 09 March 2026 00:35:34 +0000 (0:00:02.241) 0:07:03.462 **********
2026-03-09 00:35:42.315576 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:35:42.315587 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:35:42.315598 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:35:42.315609 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:35:42.315619 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:35:42.315630 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:35:42.315640 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:35:42.315651 | orchestrator |
2026-03-09 00:35:42.315662 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-09 00:35:42.315673 | orchestrator | Monday 09 March 2026 00:35:34 +0000 (0:00:00.519) 0:07:03.982 **********
2026-03-09 00:35:42.315684 | orchestrator | ok: [testbed-manager]
2026-03-09 00:35:42.315695 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:35:42.315706 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:35:42.315717 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:35:42.315727 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:35:42.315738 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:35:42.315755 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:15.248283 | orchestrator |
2026-03-09 00:36:15.248415 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-09 00:36:15.248442 | orchestrator | Monday 09 March 2026 00:35:42 +0000 (0:00:07.337) 0:07:11.319 **********
2026-03-09 00:36:15.248462 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.248481 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:15.248501 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:15.248519 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:15.248537 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:15.248628 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:15.248649 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:15.248667 | orchestrator |
2026-03-09 00:36:15.248686 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-09 00:36:15.248705 | orchestrator | Monday 09 March 2026 00:35:43 +0000 (0:00:01.552) 0:07:12.871 **********
2026-03-09 00:36:15.248723 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.248741 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:15.248758 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:15.248775 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:15.248792 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:15.248809 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:15.248827 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:15.248844 | orchestrator |
2026-03-09 00:36:15.248861 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-09 00:36:15.248878 | orchestrator | Monday 09 March 2026 00:35:45 +0000 (0:00:01.771) 0:07:14.643 **********
2026-03-09 00:36:15.248896 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.248913 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:15.248930 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:15.248947 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:15.248965 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:15.248982 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:15.248999 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:15.249016 | orchestrator |
2026-03-09 00:36:15.249033 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-09 00:36:15.249050 | orchestrator | Monday 09 March 2026 00:35:47 +0000 (0:00:01.689) 0:07:16.332 **********
2026-03-09 00:36:15.249067 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.249085 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.249103 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.249152 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.249167 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.249182 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.249198 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.249214 | orchestrator |
2026-03-09 00:36:15.249230 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-09 00:36:15.249247 | orchestrator | Monday 09 March 2026 00:35:48 +0000 (0:00:00.872) 0:07:17.205 **********
2026-03-09 00:36:15.249263 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:36:15.249279 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:36:15.249295 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:36:15.249311 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:36:15.249326 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:36:15.249342 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:36:15.249358 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:36:15.249374 | orchestrator |
2026-03-09 00:36:15.249390 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-09 00:36:15.249406 | orchestrator | Monday 09 March 2026 00:35:49 +0000 (0:00:01.011) 0:07:18.217 **********
2026-03-09 00:36:15.249421 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:36:15.249437 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:36:15.249452 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:36:15.249468 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:36:15.249483 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:36:15.249498 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:36:15.249514 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:36:15.249529 | orchestrator |
2026-03-09 00:36:15.249545 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-09 00:36:15.249585 | orchestrator | Monday 09 March 2026 00:35:49 +0000 (0:00:00.536) 0:07:18.754 **********
2026-03-09 00:36:15.249602 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.249637 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.249653 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.249669 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.249684 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.249700 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.249716 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.249731 | orchestrator |
2026-03-09 00:36:15.249746 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-09 00:36:15.249762 | orchestrator | Monday 09 March 2026 00:35:50 +0000 (0:00:00.567) 0:07:19.321 **********
2026-03-09 00:36:15.249777 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.249792 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.249808 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.249821 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.249834 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.249846 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.249859 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.249871 | orchestrator |
2026-03-09 00:36:15.249884 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-09 00:36:15.249896 | orchestrator | Monday 09 March 2026 00:35:50 +0000 (0:00:00.517) 0:07:19.839 **********
2026-03-09 00:36:15.249909 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.249921 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.249934 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.249947 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.249959 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.249971 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.249984 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.249997 | orchestrator |
2026-03-09 00:36:15.250010 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-09 00:36:15.250082 | orchestrator | Monday 09 March 2026 00:35:51 +0000 (0:00:00.760) 0:07:20.600 **********
2026-03-09 00:36:15.250096 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.250109 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.250133 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.250146 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.250159 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.250172 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.250184 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.250197 | orchestrator |
2026-03-09 00:36:15.250235 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-09 00:36:15.250249 | orchestrator | Monday 09 March 2026 00:35:57 +0000 (0:00:05.533) 0:07:26.133 **********
2026-03-09 00:36:15.250262 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:36:15.250275 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:36:15.250288 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:36:15.250302 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:36:15.250315 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:36:15.250328 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:36:15.250340 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:36:15.250353 | orchestrator |
2026-03-09 00:36:15.250366 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-09 00:36:15.250379 | orchestrator | Monday 09 March 2026 00:35:57 +0000 (0:00:00.539) 0:07:26.673 **********
2026-03-09 00:36:15.250393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:15.250408 | orchestrator |
2026-03-09 00:36:15.250422 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-09 00:36:15.250435 | orchestrator | Monday 09 March 2026 00:35:58 +0000 (0:00:01.087) 0:07:27.760 **********
2026-03-09 00:36:15.250448 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.250461 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.250474 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.250487 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.250500 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.250513 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.250526 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.250540 | orchestrator |
2026-03-09 00:36:15.250589 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-09 00:36:15.250604 | orchestrator | Monday 09 March 2026 00:36:00 +0000 (0:00:01.808) 0:07:29.568 **********
2026-03-09 00:36:15.250617 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.250629 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.250642 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.250654 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.250667 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.250679 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.250692 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.250704 | orchestrator |
2026-03-09 00:36:15.250716 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-09 00:36:15.250729 | orchestrator | Monday 09 March 2026 00:36:02 +0000 (0:00:01.765) 0:07:31.334 **********
2026-03-09 00:36:15.250741 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:15.250754 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:15.250766 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:15.250779 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:15.250792 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:15.250804 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:15.250816 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:15.250828 | orchestrator |
2026-03-09 00:36:15.250841 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-09 00:36:15.250854 | orchestrator | Monday 09 March 2026 00:36:03 +0000 (0:00:00.877) 0:07:32.212 **********
2026-03-09 00:36:15.250874 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250889 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250913 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250926 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250938 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250950 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250962 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:36:15.250974 | orchestrator |
2026-03-09 00:36:15.250987 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-09 00:36:15.251000 | orchestrator | Monday 09 March 2026 00:36:05 +0000 (0:00:01.927) 0:07:34.139 **********
2026-03-09 00:36:15.251014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:15.251027 | orchestrator |
2026-03-09 00:36:15.251040 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-09 00:36:15.251052 | orchestrator | Monday 09 March 2026 00:36:05 +0000 (0:00:00.813) 0:07:34.953 **********
2026-03-09 00:36:15.251065 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:15.251078 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:15.251090 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:15.251103 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:15.251116 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:15.251128 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:15.251141 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:15.251154 | orchestrator |
2026-03-09 00:36:15.251176 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-09 00:36:45.679844 | orchestrator | Monday 09 March 2026 00:36:15 +0000 (0:00:09.301) 0:07:44.254 **********
2026-03-09 00:36:45.679957 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:45.679975 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:45.679987 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:45.679998 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:45.680008 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:45.680019 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:45.680030 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:45.680041 | orchestrator |
2026-03-09 00:36:45.680053 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-09 00:36:45.680064 | orchestrator | Monday 09 March 2026 00:36:17 +0000 (0:00:02.070) 0:07:46.325 **********
2026-03-09 00:36:45.680075 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:45.680086 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:45.680097 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:45.680108 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:45.680118 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:45.680129 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:45.680140 | orchestrator |
2026-03-09 00:36:45.680150 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-09 00:36:45.680161 | orchestrator | Monday 09 March 2026 00:36:18 +0000 (0:00:01.287) 0:07:47.613 **********
2026-03-09 00:36:45.680172 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.680184 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.680195 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.680206 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.680216 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.680252 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.680264 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.680275 | orchestrator |
2026-03-09 00:36:45.680286 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-09 00:36:45.680297 | orchestrator |
2026-03-09 00:36:45.680308 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-09 00:36:45.680318 | orchestrator | Monday 09 March 2026 00:36:19 +0000 (0:00:01.257) 0:07:48.871 **********
2026-03-09 00:36:45.680329 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:36:45.680340 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:36:45.680351 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:36:45.680361 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:36:45.680372 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:36:45.680382 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:36:45.680393 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:36:45.680403 | orchestrator |
2026-03-09 00:36:45.680416 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-09 00:36:45.680434 | orchestrator |
2026-03-09 00:36:45.680454 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-09 00:36:45.680485 | orchestrator | Monday 09 March 2026 00:36:20 +0000 (0:00:00.780) 0:07:49.652 **********
2026-03-09 00:36:45.680503 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.680519 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.680536 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.680553 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.680599 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.680617 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.680634 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.680652 | orchestrator |
2026-03-09 00:36:45.680670 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-09 00:36:45.680705 | orchestrator | Monday 09 March 2026 00:36:21 +0000 (0:00:01.349) 0:07:51.001 **********
2026-03-09 00:36:45.680725 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:45.680744 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:45.680764 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:45.680782 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:45.680799 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:45.680811 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:45.680821 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:45.680832 | orchestrator |
2026-03-09 00:36:45.680843 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-09 00:36:45.680854 | orchestrator | Monday 09 March 2026 00:36:23 +0000 (0:00:01.516) 0:07:52.518 **********
2026-03-09 00:36:45.680865 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:36:45.680876 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:36:45.680886 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:36:45.680897 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:36:45.680908 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:36:45.680918 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:36:45.680929 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:36:45.680939 | orchestrator |
2026-03-09 00:36:45.680950 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-09 00:36:45.680961 | orchestrator | Monday 09 March 2026 00:36:23 +0000 (0:00:00.439) 0:07:52.958 **********
2026-03-09 00:36:45.680974 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:45.680986 | orchestrator |
2026-03-09 00:36:45.680997 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-09 00:36:45.681008 | orchestrator | Monday 09 March 2026 00:36:24 +0000 (0:00:00.870) 0:07:53.828 **********
2026-03-09 00:36:45.681021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:45.681047 | orchestrator |
2026-03-09 00:36:45.681058 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-09 00:36:45.681069 | orchestrator | Monday 09 March 2026 00:36:25 +0000 (0:00:00.692) 0:07:54.521 **********
2026-03-09 00:36:45.681080 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681091 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681101 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681112 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681123 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681133 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681144 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681155 | orchestrator |
2026-03-09 00:36:45.681194 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-09 00:36:45.681214 | orchestrator | Monday 09 March 2026 00:36:33 +0000 (0:00:07.954) 0:08:02.475 **********
2026-03-09 00:36:45.681232 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681251 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681271 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681283 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681294 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681304 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681315 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681325 | orchestrator |
2026-03-09 00:36:45.681336 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-09 00:36:45.681347 | orchestrator | Monday 09 March 2026 00:36:34 +0000 (0:00:01.090) 0:08:03.565 **********
2026-03-09 00:36:45.681358 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681368 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681379 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681389 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681400 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681410 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681421 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681431 | orchestrator |
2026-03-09 00:36:45.681442 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-09 00:36:45.681453 | orchestrator | Monday 09 March 2026 00:36:35 +0000 (0:00:01.443) 0:08:05.009 **********
2026-03-09 00:36:45.681464 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681474 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681485 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681496 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681506 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681517 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681527 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681538 | orchestrator |
2026-03-09 00:36:45.681548 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-09 00:36:45.681588 | orchestrator | Monday 09 March 2026 00:36:37 +0000 (0:00:01.993) 0:08:07.002 **********
2026-03-09 00:36:45.681600 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681610 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681621 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681632 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681642 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681653 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681664 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681674 | orchestrator |
2026-03-09 00:36:45.681685 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-09 00:36:45.681696 | orchestrator | Monday 09 March 2026 00:36:39 +0000 (0:00:01.290) 0:08:08.293 **********
2026-03-09 00:36:45.681706 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.681717 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.681749 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.681760 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.681771 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.681782 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.681792 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.681803 | orchestrator |
2026-03-09 00:36:45.681814 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-09 00:36:45.681825 | orchestrator |
2026-03-09 00:36:45.681842 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-09 00:36:45.681853 | orchestrator | Monday 09 March 2026 00:36:40 +0000 (0:00:01.123) 0:08:09.416 **********
2026-03-09 00:36:45.681864 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:45.681875 | orchestrator |
2026-03-09 00:36:45.681886 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:36:45.681897 | orchestrator | Monday 09 March 2026 00:36:41 +0000 (0:00:00.856) 0:08:10.273 **********
2026-03-09 00:36:45.681907 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:45.681918 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:45.681929 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:45.681939 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:45.681950 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:45.681960 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:45.681971 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:45.681981 | orchestrator |
2026-03-09 00:36:45.681992 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:36:45.682003 | orchestrator | Monday 09 March 2026 00:36:42 +0000 (0:00:01.180) 0:08:11.453 **********
2026-03-09 00:36:45.682015 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:45.682119 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:45.682138 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:45.682157 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:45.682212 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:45.682230 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:45.682248 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:45.682267 | orchestrator |
2026-03-09 00:36:45.682286 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-09 00:36:45.682305 | orchestrator | Monday 09 March 2026 00:36:43 +0000 (0:00:01.246) 0:08:12.700 **********
2026-03-09 00:36:45.682323 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:36:45.682343 | orchestrator |
2026-03-09 00:36:45.682363 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:36:45.682382 | orchestrator | Monday 09 March 2026 00:36:44 +0000 (0:00:01.110) 0:08:13.811 **********
2026-03-09 00:36:45.682399 | orchestrator | ok: [testbed-manager]
2026-03-09 00:36:45.682410 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:36:45.682421 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:36:45.682432 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:36:45.682443 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:36:45.682453 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:36:45.682464 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:36:45.682475 | orchestrator |
2026-03-09 00:36:45.682499 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:36:47.330133 | orchestrator | Monday 09 March 2026 00:36:45 +0000 (0:00:00.876) 0:08:14.687 **********
2026-03-09 00:36:47.331082 | orchestrator | changed: [testbed-manager]
2026-03-09 00:36:47.331124 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:36:47.331137 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:36:47.331148 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:36:47.331159 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:36:47.331170 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:36:47.331181 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:36:47.331218 | orchestrator |
2026-03-09 00:36:47.331232 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:36:47.331244 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-09 00:36:47.331257 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:36:47.331268 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:36:47.331279 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:36:47.331290 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-09 00:36:47.331317 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:36:47.331329 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:36:47.331339 | orchestrator |
2026-03-09 00:36:47.331350 | orchestrator |
2026-03-09 00:36:47.331361 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:36:47.331373 | orchestrator | Monday 09 March 2026 00:36:46 +0000 (0:00:01.130) 0:08:15.818 **********
2026-03-09 00:36:47.331384 | orchestrator | ===============================================================================
2026-03-09 00:36:47.331395 | orchestrator | osism.commons.packages : Install required packages --------------------- 86.72s
2026-03-09 00:36:47.331406 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.14s
2026-03-09 00:36:47.331416 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.70s
2026-03-09 00:36:47.331427 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.42s
2026-03-09 00:36:47.331438 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.23s
2026-03-09 00:36:47.331464 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.22s
2026-03-09 00:36:47.331475 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.07s
2026-03-09 00:36:47.331486 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.33s
2026-03-09 00:36:47.331497 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.31s
2026-03-09 00:36:47.331508 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.30s
2026-03-09 00:36:47.331518 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.95s
2026-03-09 00:36:47.331529 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.91s
2026-03-09 00:36:47.331539 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.81s
2026-03-09 00:36:47.331550 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.78s
2026-03-09 00:36:47.331591 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.76s
2026-03-09 00:36:47.331602 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.34s
2026-03-09 00:36:47.331613 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.49s
2026-03-09 00:36:47.331624 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.10s
2026-03-09 00:36:47.331635 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.96s
2026-03-09 00:36:47.331645 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.65s
2026-03-09 00:36:47.653431 | orchestrator | + osism apply fail2ban
2026-03-09 00:37:00.384519 | orchestrator | 2026-03-09 00:37:00 | INFO  | Task 1e2486fd-e8af-4697-81e4-415b10c7fe06 (fail2ban) was prepared for execution.
2026-03-09 00:37:00.384665 | orchestrator | 2026-03-09 00:37:00 | INFO  | It takes a moment until task 1e2486fd-e8af-4697-81e4-415b10c7fe06 (fail2ban) has been started and output is visible here.
2026-03-09 00:37:23.347815 | orchestrator |
2026-03-09 00:37:23.347917 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-09 00:37:23.347932 | orchestrator |
2026-03-09 00:37:23.347942 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-09 00:37:23.347953 | orchestrator | Monday 09 March 2026 00:37:05 +0000 (0:00:00.284) 0:00:00.284 **********
2026-03-09 00:37:23.347964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:37:23.347976 | orchestrator |
2026-03-09 00:37:23.347985 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-09 00:37:23.347994 | orchestrator | Monday 09 March 2026 00:37:06 +0000 (0:00:01.201) 0:00:01.486 **********
2026-03-09 00:37:23.348003 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:37:23.348012 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:37:23.348020 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:37:23.348029 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:37:23.348036 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:37:23.348045 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:37:23.348054 | orchestrator | changed: [testbed-manager]
2026-03-09 00:37:23.348062 | orchestrator |
2026-03-09 00:37:23.348071 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-09 00:37:23.348080 | orchestrator | Monday 09 March 2026 00:37:18 +0000 (0:00:11.781) 0:00:13.267 **********
2026-03-09 00:37:23.348089 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:23.348097 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:23.348103 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:23.348108 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:23.348113 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:23.348118 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:23.348123 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:37:23.348128 | orchestrator | 2026-03-09 00:37:23.348134 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-09 00:37:23.348139 | orchestrator | Monday 09 March 2026 00:37:19 +0000 (0:00:01.595) 0:00:14.863 ********** 2026-03-09 00:37:23.348144 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:37:23.348150 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:37:23.348155 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:37:23.348160 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:37:23.348167 | orchestrator | ok: [testbed-manager] 2026-03-09 00:37:23.348176 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:37:23.348184 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:37:23.348191 | orchestrator | 2026-03-09 00:37:23.348199 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-09 00:37:23.348207 | orchestrator | Monday 09 March 2026 00:37:21 +0000 (0:00:01.575) 0:00:16.438 ********** 2026-03-09 00:37:23.348216 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:23.348225 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:23.348233 | orchestrator | changed: [testbed-manager] 2026-03-09 00:37:23.348242 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:23.348251 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:23.348259 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:23.348268 | orchestrator | changed: 
[testbed-node-5]
2026-03-09 00:37:23.348273 | orchestrator |
2026-03-09 00:37:23.348278 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:37:23.348284 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348313 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348319 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348324 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348328 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348333 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348338 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:37:23.348343 | orchestrator |
2026-03-09 00:37:23.348348 | orchestrator |
2026-03-09 00:37:23.348352 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:37:23.348357 | orchestrator | Monday 09 March 2026 00:37:22 +0000 (0:00:01.665) 0:00:18.104 **********
2026-03-09 00:37:23.348363 | orchestrator | ===============================================================================
2026-03-09 00:37:23.348369 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.78s
2026-03-09 00:37:23.348374 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-03-09 00:37:23.348380 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.60s
2026-03-09 00:37:23.348386 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.58s 2026-03-09 00:37:23.348391 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.20s 2026-03-09 00:37:23.707495 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-09 00:37:23.707628 | orchestrator | + osism apply network 2026-03-09 00:37:35.851804 | orchestrator | 2026-03-09 00:37:35 | INFO  | Task 7c1695ac-fa60-403a-90f0-e349a28a3675 (network) was prepared for execution. 2026-03-09 00:37:35.851938 | orchestrator | 2026-03-09 00:37:35 | INFO  | It takes a moment until task 7c1695ac-fa60-403a-90f0-e349a28a3675 (network) has been started and output is visible here. 2026-03-09 00:38:05.093404 | orchestrator | 2026-03-09 00:38:05.093517 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-09 00:38:05.093535 | orchestrator | 2026-03-09 00:38:05.093611 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-09 00:38:05.093626 | orchestrator | Monday 09 March 2026 00:37:40 +0000 (0:00:00.294) 0:00:00.294 ********** 2026-03-09 00:38:05.093638 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.093650 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.093661 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.093672 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.093683 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.093693 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.093704 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.093715 | orchestrator | 2026-03-09 00:38:05.093726 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-09 00:38:05.093737 | orchestrator | Monday 09 March 2026 00:37:40 +0000 (0:00:00.744) 0:00:01.038 ********** 2026-03-09 00:38:05.093750 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:38:05.093764 | orchestrator | 2026-03-09 00:38:05.093776 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-09 00:38:05.093812 | orchestrator | Monday 09 March 2026 00:37:42 +0000 (0:00:01.239) 0:00:02.278 ********** 2026-03-09 00:38:05.093824 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.093835 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.093845 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.093881 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.093892 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.093903 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.093913 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.093924 | orchestrator | 2026-03-09 00:38:05.093938 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-09 00:38:05.093951 | orchestrator | Monday 09 March 2026 00:37:44 +0000 (0:00:02.019) 0:00:04.298 ********** 2026-03-09 00:38:05.093964 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.093976 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.093990 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.094002 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.094083 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.094106 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.094125 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.094145 | orchestrator | 2026-03-09 00:38:05.094166 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-09 00:38:05.094184 | orchestrator | Monday 09 March 2026 00:37:45 +0000 (0:00:01.820) 0:00:06.118 ********** 
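The netplan branch of the network role installs the required packages, removes ifupdown, and renders a netplan file per host; the cleanup tasks later in this play keep `/etc/netplan/01-osism.yaml` and remove cloud-init's `50-cloud-init.yaml`. A hypothetical minimal sketch of such a rendered file -- interface name and address here are assumptions for illustration, not values taken from this job -- could be:

```yaml
# Hypothetical minimal /etc/netplan/01-osism.yaml -- illustrative only;
# the real file is rendered from the role's template with host variables.
network:
  version: 2
  ethernets:
    eth0:                      # assumed interface name
      dhcp4: false
      addresses:
        - 192.168.16.5/24      # assumed prefix for the management address
```

`netplan apply` (triggered via the role's handlers when the configuration changes) converts this into the backend's runtime configuration.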
2026-03-09 00:38:05.094202 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-09 00:38:05.094214 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-09 00:38:05.094224 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-09 00:38:05.094235 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-09 00:38:05.094246 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-09 00:38:05.094256 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-09 00:38:05.094267 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-09 00:38:05.094278 | orchestrator | 2026-03-09 00:38:05.094306 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-09 00:38:05.094322 | orchestrator | Monday 09 March 2026 00:37:47 +0000 (0:00:01.047) 0:00:07.165 ********** 2026-03-09 00:38:05.094334 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 00:38:05.094345 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:38:05.094356 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:38:05.094366 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 00:38:05.094377 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 00:38:05.094388 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 00:38:05.094398 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 00:38:05.094409 | orchestrator | 2026-03-09 00:38:05.094419 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-09 00:38:05.094430 | orchestrator | Monday 09 March 2026 00:37:50 +0000 (0:00:03.324) 0:00:10.490 ********** 2026-03-09 00:38:05.094441 | orchestrator | changed: [testbed-manager] 2026-03-09 00:38:05.094451 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:05.094462 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:05.094472 | orchestrator | changed: 
[testbed-node-2] 2026-03-09 00:38:05.094483 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:05.094494 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:05.094504 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:05.094515 | orchestrator | 2026-03-09 00:38:05.094526 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-09 00:38:05.094536 | orchestrator | Monday 09 March 2026 00:37:52 +0000 (0:00:01.672) 0:00:12.162 ********** 2026-03-09 00:38:05.094575 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:38:05.094592 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:38:05.094603 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 00:38:05.094614 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 00:38:05.094636 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 00:38:05.094646 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 00:38:05.094657 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 00:38:05.094668 | orchestrator | 2026-03-09 00:38:05.094679 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-09 00:38:05.094689 | orchestrator | Monday 09 March 2026 00:37:53 +0000 (0:00:01.770) 0:00:13.932 ********** 2026-03-09 00:38:05.094700 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.094711 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.094721 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.094732 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.094743 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.094754 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.094765 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.094775 | orchestrator | 2026-03-09 00:38:05.094786 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-09 00:38:05.094818 | 
orchestrator | Monday 09 March 2026 00:37:55 +0000 (0:00:01.226) 0:00:15.159 ********** 2026-03-09 00:38:05.094830 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:38:05.094840 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:05.094851 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:05.094862 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:05.094873 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:05.094883 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:05.094894 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:05.094905 | orchestrator | 2026-03-09 00:38:05.094916 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-09 00:38:05.094926 | orchestrator | Monday 09 March 2026 00:37:55 +0000 (0:00:00.677) 0:00:15.836 ********** 2026-03-09 00:38:05.094937 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.094948 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.094959 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.094969 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.094980 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.094991 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.095001 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.095012 | orchestrator | 2026-03-09 00:38:05.095023 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-09 00:38:05.095034 | orchestrator | Monday 09 March 2026 00:37:57 +0000 (0:00:02.226) 0:00:18.063 ********** 2026-03-09 00:38:05.095045 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:05.095056 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:05.095066 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:05.095077 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:05.095088 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:05.095098 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:05.095110 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-09 00:38:05.095123 | orchestrator | 2026-03-09 00:38:05.095134 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-09 00:38:05.095144 | orchestrator | Monday 09 March 2026 00:37:58 +0000 (0:00:00.966) 0:00:19.030 ********** 2026-03-09 00:38:05.095155 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.095166 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:05.095177 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:05.095187 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:38:05.095198 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:05.095209 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:05.095220 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:05.095230 | orchestrator | 2026-03-09 00:38:05.095241 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-09 00:38:05.095252 | orchestrator | Monday 09 March 2026 00:38:00 +0000 (0:00:01.659) 0:00:20.689 ********** 2026-03-09 00:38:05.095263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:38:05.095282 | orchestrator | 2026-03-09 00:38:05.095293 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-09 00:38:05.095304 | orchestrator | Monday 09 March 2026 00:38:01 +0000 (0:00:01.297) 0:00:21.987 ********** 2026-03-09 00:38:05.095315 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.095326 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.095336 | orchestrator 
| ok: [testbed-node-1] 2026-03-09 00:38:05.095347 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.095363 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.095374 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.095384 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.095395 | orchestrator | 2026-03-09 00:38:05.095406 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-09 00:38:05.095417 | orchestrator | Monday 09 March 2026 00:38:02 +0000 (0:00:00.980) 0:00:22.967 ********** 2026-03-09 00:38:05.095428 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:05.095438 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:05.095449 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:05.095460 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:05.095470 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:05.095481 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:05.095491 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:05.095502 | orchestrator | 2026-03-09 00:38:05.095513 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-09 00:38:05.095524 | orchestrator | Monday 09 March 2026 00:38:03 +0000 (0:00:00.912) 0:00:23.880 ********** 2026-03-09 00:38:05.095534 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095546 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095583 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095601 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095622 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095640 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095658 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095670 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095681 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095691 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095702 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095713 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095723 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:38:05.095738 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:38:05.095756 | orchestrator | 2026-03-09 00:38:05.095784 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-09 00:38:22.338905 | orchestrator | Monday 09 March 2026 00:38:05 +0000 (0:00:01.342) 0:00:25.223 ********** 2026-03-09 00:38:22.339010 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:38:22.339028 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:22.339041 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:22.339052 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:22.339063 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:22.339074 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:22.339085 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:22.339097 | orchestrator | 2026-03-09 00:38:22.339134 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-09 00:38:22.339146 | orchestrator | Monday 09 March 2026 00:38:05 +0000 (0:00:00.668) 0:00:25.892 ********** 2026-03-09 00:38:22.339160 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-2, testbed-node-5 2026-03-09 00:38:22.339172 | orchestrator | 2026-03-09 00:38:22.339184 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-09 00:38:22.339194 | orchestrator | Monday 09 March 2026 00:38:10 +0000 (0:00:04.677) 0:00:30.569 ********** 2026-03-09 00:38:22.339207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339231 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
23}}) 2026-03-09 00:38:22.339280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339314 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339431 | orchestrator | 2026-03-09 00:38:22.339443 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-09 00:38:22.339456 | orchestrator | Monday 09 March 2026 00:38:16 +0000 (0:00:06.192) 0:00:36.761 ********** 2026-03-09 00:38:22.339469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339496 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339510 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:38:22.339623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339635 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 
'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:22.339698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:28.693046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:38:28.693156 | orchestrator | 2026-03-09 00:38:28.693174 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-09 00:38:28.693188 | orchestrator | Monday 09 March 2026 00:38:22 +0000 (0:00:05.699) 0:00:42.461 ********** 2026-03-09 00:38:28.693202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:38:28.693214 | orchestrator | 2026-03-09 00:38:28.693226 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-09 00:38:28.693237 | orchestrator | Monday 09 March 2026 00:38:23 +0000 (0:00:01.288) 0:00:43.749 **********
2026-03-09 00:38:28.693248 | orchestrator | ok: [testbed-manager]
2026-03-09 00:38:28.693260 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:38:28.693270 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:38:28.693281 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:38:28.693292 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:38:28.693302 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:38:28.693313 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:38:28.693324 | orchestrator |
2026-03-09 00:38:28.693335 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-09 00:38:28.693346 | orchestrator | Monday 09 March 2026 00:38:24 +0000 (0:00:01.206) 0:00:44.956 **********
2026-03-09 00:38:28.693356 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693368 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693379 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693390 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693400 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:38:28.693413 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693424 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693435 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693446 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693483 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:38:28.693494 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693521 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693534 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693623 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693662 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:38:28.693675 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693689 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693701 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693713 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693725 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:38:28.693738 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693750 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693762 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693774 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693786 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:38:28.693798 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693810 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693823 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693836 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693848 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:38:28.693860 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-09 00:38:28.693872 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-09 00:38:28.693884 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-09 00:38:28.693894 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-09 00:38:28.693905 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:38:28.693915 | orchestrator |
2026-03-09 00:38:28.693926 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-09 00:38:28.693985 | orchestrator | Monday 09 March 2026 00:38:26 +0000 (0:00:02.068) 0:00:47.025 **********
2026-03-09 00:38:28.693999 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:38:28.694010 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:38:28.694084 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:38:28.694095 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:38:28.694106 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:38:28.694117 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:38:28.694128 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:38:28.694138 | orchestrator |
2026-03-09 00:38:28.694149 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-09 00:38:28.694160 | orchestrator | Monday 09 March 2026 00:38:27 +0000 (0:00:00.631) 0:00:47.656 **********
2026-03-09 00:38:28.694170 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:38:28.694181 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:38:28.694228 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:38:28.694240 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:38:28.694251 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:38:28.694262 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:38:28.694273 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:38:28.694283 | orchestrator |
2026-03-09 00:38:28.694294 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:38:28.694306 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 00:38:28.694319 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694341 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694352 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694363 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694374 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694385 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 00:38:28.694395 | orchestrator |
2026-03-09 00:38:28.694408 | orchestrator |
2026-03-09 00:38:28.694426 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:38:28.694445 | orchestrator | Monday 09 March 2026 00:38:28 +0000 (0:00:00.754) 0:00:48.410 **********
2026-03-09 00:38:28.694473 | orchestrator | ===============================================================================
2026-03-09 00:38:28.694493 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.19s
2026-03-09 00:38:28.694504 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.70s
2026-03-09 00:38:28.694515 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.68s
2026-03-09 00:38:28.694577 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s
2026-03-09 00:38:28.694596 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.23s
2026-03-09 00:38:28.694606 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.07s
2026-03-09 00:38:28.694622 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s
2026-03-09 00:38:28.694642 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s
2026-03-09 00:38:28.694660 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.77s
2026-03-09 00:38:28.694672 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-03-09 00:38:28.694682 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s
2026-03-09 00:38:28.694693 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s
2026-03-09 00:38:28.694704 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s
2026-03-09 00:38:28.694714 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s
2026-03-09 00:38:28.694725 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s
2026-03-09 00:38:28.694735 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s
2026-03-09 00:38:28.694746 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2026-03-09 00:38:28.694760 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s
2026-03-09 00:38:28.694780 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s
2026-03-09 00:38:28.694796 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s
2026-03-09 00:38:29.002160 | orchestrator | + osism apply wireguard
2026-03-09 00:38:41.039059 | orchestrator | 2026-03-09 00:38:41 | INFO  | Task 182e5626-23d2-4d4d-aa7c-54305e8548ed (wireguard) was prepared for execution.
2026-03-09 00:38:41.039174 | orchestrator | 2026-03-09 00:38:41 | INFO  | It takes a moment until task 182e5626-23d2-4d4d-aa7c-54305e8548ed (wireguard) has been started and output is visible here.
2026-03-09 00:39:01.879655 | orchestrator |
2026-03-09 00:39:01.879766 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-09 00:39:01.879810 | orchestrator |
2026-03-09 00:39:01.879824 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-09 00:39:01.879835 | orchestrator | Monday 09 March 2026 00:38:45 +0000 (0:00:00.223) 0:00:00.223 **********
2026-03-09 00:39:01.879846 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:01.879859 | orchestrator |
2026-03-09 00:39:01.879870 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-09 00:39:01.879881 | orchestrator | Monday 09 March 2026 00:38:47 +0000 (0:00:01.795) 0:00:02.018 **********
2026-03-09 00:39:01.879892 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.879909 | orchestrator |
2026-03-09 00:39:01.879921 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-09 00:39:01.879932 | orchestrator | Monday 09 March 2026 00:38:54 +0000 (0:00:06.967) 0:00:08.986 **********
2026-03-09 00:39:01.879943 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.879954 | orchestrator |
2026-03-09 00:39:01.879965 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-09 00:39:01.879976 | orchestrator | Monday 09 March 2026 00:38:54 +0000 (0:00:00.590) 0:00:09.577 **********
2026-03-09 00:39:01.879987 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.879997 | orchestrator |
2026-03-09 00:39:01.880008 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-09 00:39:01.880019 | orchestrator | Monday 09 March 2026 00:38:55 +0000 (0:00:00.441) 0:00:10.019 **********
2026-03-09 00:39:01.880030 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:01.880041 | orchestrator |
2026-03-09 00:39:01.880051 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-09 00:39:01.880062 | orchestrator | Monday 09 March 2026 00:38:55 +0000 (0:00:00.696) 0:00:10.715 **********
2026-03-09 00:39:01.880073 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:01.880084 | orchestrator |
2026-03-09 00:39:01.880095 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-09 00:39:01.880106 | orchestrator | Monday 09 March 2026 00:38:56 +0000 (0:00:00.431) 0:00:11.147 **********
2026-03-09 00:39:01.880117 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:01.880130 | orchestrator |
2026-03-09 00:39:01.880143 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-09 00:39:01.880156 | orchestrator | Monday 09 March 2026 00:38:56 +0000 (0:00:00.432) 0:00:11.580 **********
2026-03-09 00:39:01.880168 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.880181 | orchestrator |
2026-03-09 00:39:01.880194 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-09 00:39:01.880206 | orchestrator | Monday 09 March 2026 00:38:57 +0000 (0:00:01.196) 0:00:12.776 **********
2026-03-09 00:39:01.880219 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-09 00:39:01.880232 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.880245 | orchestrator |
2026-03-09 00:39:01.880258 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-09 00:39:01.880274 | orchestrator | Monday 09 March 2026 00:38:58 +0000 (0:00:00.965) 0:00:13.742 **********
2026-03-09 00:39:01.880293 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.880311 | orchestrator |
2026-03-09 00:39:01.880329 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-09 00:39:01.880349 | orchestrator | Monday 09 March 2026 00:39:00 +0000 (0:00:01.689) 0:00:15.431 **********
2026-03-09 00:39:01.880370 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:01.880384 | orchestrator |
2026-03-09 00:39:01.880397 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:39:01.880410 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:39:01.880424 | orchestrator |
2026-03-09 00:39:01.880438 | orchestrator |
2026-03-09 00:39:01.880449 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:39:01.880469 | orchestrator | Monday 09 March 2026 00:39:01 +0000 (0:00:00.930) 0:00:16.361 **********
2026-03-09 00:39:01.880480 | orchestrator | ===============================================================================
2026-03-09 00:39:01.880491 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.97s
2026-03-09 00:39:01.880502 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.80s
2026-03-09 00:39:01.880513 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2026-03-09 00:39:01.880524 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-03-09 00:39:01.880594 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s
2026-03-09 00:39:01.880610 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s
2026-03-09 00:39:01.880621 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-03-09 00:39:01.880632 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-03-09 00:39:01.880643 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2026-03-09 00:39:01.880653 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-03-09 00:39:01.880664 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-03-09 00:39:02.196687 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-09 00:39:02.230991 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-09 00:39:02.231058 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-09 00:39:02.310703 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 175 0 --:--:-- --:--:-- --:--:-- 177
2026-03-09 00:39:02.325644 | orchestrator | + osism apply --environment custom workarounds
2026-03-09 00:39:04.282763 | orchestrator | 2026-03-09 00:39:04 | INFO  | Trying to run play workarounds in environment custom
2026-03-09 00:39:14.442483 | orchestrator | 2026-03-09 00:39:14 | INFO  | Task d08b41b4-bd37-4996-8168-8f5e989c8d0b (workarounds) was prepared for execution.
2026-03-09 00:39:14.442672 | orchestrator | 2026-03-09 00:39:14 | INFO  | It takes a moment until task d08b41b4-bd37-4996-8168-8f5e989c8d0b (workarounds) has been started and output is visible here.
2026-03-09 00:39:40.110305 | orchestrator |
2026-03-09 00:39:40.110420 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:39:40.110438 | orchestrator |
2026-03-09 00:39:40.110449 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-09 00:39:40.110461 | orchestrator | Monday 09 March 2026 00:39:18 +0000 (0:00:00.130) 0:00:00.130 **********
2026-03-09 00:39:40.110473 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110485 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110495 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110506 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110517 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110571 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110586 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-09 00:39:40.110597 | orchestrator |
2026-03-09 00:39:40.110609 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-09 00:39:40.110620 | orchestrator |
2026-03-09 00:39:40.110630 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-09 00:39:40.110642 | orchestrator | Monday 09 March 2026 00:39:19 +0000 (0:00:00.818) 0:00:00.949 **********
2026-03-09 00:39:40.110653 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:40.110692 | orchestrator |
2026-03-09 00:39:40.110704 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-09 00:39:40.110715 | orchestrator |
2026-03-09 00:39:40.110727 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-09 00:39:40.110738 | orchestrator | Monday 09 March 2026 00:39:22 +0000 (0:00:02.490) 0:00:03.440 **********
2026-03-09 00:39:40.110749 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:39:40.110760 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:39:40.110771 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:39:40.110782 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:39:40.110793 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:39:40.110803 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:39:40.110814 | orchestrator |
2026-03-09 00:39:40.110825 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-09 00:39:40.110838 | orchestrator |
2026-03-09 00:39:40.110851 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-09 00:39:40.110878 | orchestrator | Monday 09 March 2026 00:39:23 +0000 (0:00:01.813) 0:00:05.254 **********
2026-03-09 00:39:40.110893 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110907 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110920 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110932 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110945 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110958 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-09 00:39:40.110971 | orchestrator |
2026-03-09 00:39:40.110984 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-09 00:39:40.110997 | orchestrator | Monday 09 March 2026 00:39:25 +0000 (0:00:01.451) 0:00:06.705 **********
2026-03-09 00:39:40.111011 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:39:40.111024 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:39:40.111034 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:39:40.111045 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:39:40.111056 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:39:40.111067 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:39:40.111078 | orchestrator |
2026-03-09 00:39:40.111089 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-09 00:39:40.111100 | orchestrator | Monday 09 March 2026 00:39:29 +0000 (0:00:03.797) 0:00:10.503 **********
2026-03-09 00:39:40.111111 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:39:40.111122 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:39:40.111133 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:39:40.111144 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:39:40.111155 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:39:40.111165 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:39:40.111176 | orchestrator |
2026-03-09 00:39:40.111187 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-09 00:39:40.111198 | orchestrator |
2026-03-09 00:39:40.111209 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-09 00:39:40.111220 | orchestrator | Monday 09 March 2026 00:39:29 +0000 (0:00:00.734) 0:00:11.238 **********
2026-03-09 00:39:40.111231 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:39:40.111242 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:39:40.111253 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:39:40.111264 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:39:40.111274 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:39:40.111285 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:39:40.111303 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:40.111315 | orchestrator |
2026-03-09 00:39:40.111326 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-09 00:39:40.111337 | orchestrator | Monday 09 March 2026 00:39:31 +0000 (0:00:01.646) 0:00:12.884 **********
2026-03-09 00:39:40.111347 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:39:40.111358 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:39:40.111369 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:39:40.111380 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:39:40.111391 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:39:40.111402 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:39:40.111431 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:40.111443 | orchestrator |
2026-03-09 00:39:40.111455 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-09 00:39:40.111466 | orchestrator | Monday 09 March 2026 00:39:33 +0000 (0:00:01.581) 0:00:14.466 **********
2026-03-09 00:39:40.111477 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:39:40.111488 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:39:40.111499 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:39:40.111510 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:39:40.111520 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:39:40.111559 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:39:40.111577 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:40.111589 | orchestrator |
2026-03-09 00:39:40.111600 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-09 00:39:40.111611 | orchestrator | Monday 09 March 2026 00:39:34 +0000 (0:00:01.581) 0:00:16.047 **********
2026-03-09 00:39:40.111622 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:39:40.111633 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:39:40.111643 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:39:40.111654 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:39:40.111665 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:39:40.111676 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:39:40.111686 | orchestrator | changed: [testbed-manager]
2026-03-09 00:39:40.111697 | orchestrator |
2026-03-09 00:39:40.111708 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-09 00:39:40.111718 | orchestrator | Monday 09 March 2026 00:39:36 +0000 (0:00:01.846) 0:00:17.894 **********
2026-03-09 00:39:40.111729 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:39:40.111740 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:39:40.111751 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:39:40.111761 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:39:40.111772 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:39:40.111783 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:39:40.111793 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:39:40.111804 | orchestrator |
2026-03-09 00:39:40.111815 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-09 00:39:40.111826 | orchestrator |
2026-03-09 00:39:40.111837 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-09 00:39:40.111847 | orchestrator | Monday 09 March 2026 00:39:37 +0000 (0:00:00.638) 0:00:18.533 **********
2026-03-09 00:39:40.111858 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:39:40.111869 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:39:40.111879 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:39:40.111890 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:39:40.111901 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:39:40.111917 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:39:40.111929 | orchestrator | ok: [testbed-manager]
2026-03-09 00:39:40.111939 | orchestrator |
2026-03-09 00:39:40.111950 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:39:40.111962 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:39:40.111974 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.111992 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.112004 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.112015 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.112026 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.112036 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:39:40.112047 | orchestrator |
2026-03-09 00:39:40.112058 | orchestrator |
2026-03-09 00:39:40.112069 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:39:40.112080 | orchestrator | Monday 09 March 2026 00:39:40 +0000 (0:00:02.970) 0:00:21.503 **********
2026-03-09 00:39:40.112090 | orchestrator | ===============================================================================
2026-03-09 00:39:40.112101 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s
2026-03-09 00:39:40.112112 | orchestrator | Install python3-docker -------------------------------------------------- 2.97s
2026-03-09 00:39:40.112123 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s
2026-03-09 00:39:40.112134 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.85s
2026-03-09 00:39:40.112144 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2026-03-09 00:39:40.112155 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2026-03-09 00:39:40.112166 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s
2026-03-09 00:39:40.112176 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s
2026-03-09 00:39:40.112187 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s
2026-03-09 00:39:40.112198 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2026-03-09 00:39:40.112209 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s
2026-03-09 00:39:40.112227 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2026-03-09 00:39:40.826527 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-09 00:39:52.921011 | orchestrator | 2026-03-09 00:39:52 | INFO  | Task 569e3356-83e7-450e-b4bb-1e99fd983872 (reboot) was prepared for execution.
2026-03-09 00:39:52.921181 | orchestrator | 2026-03-09 00:39:52 | INFO  | It takes a moment until task 569e3356-83e7-450e-b4bb-1e99fd983872 (reboot) has been started and output is visible here.
2026-03-09 00:40:03.061039 | orchestrator |
2026-03-09 00:40:03.061150 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.061165 | orchestrator |
2026-03-09 00:40:03.061179 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.061193 | orchestrator | Monday 09 March 2026 00:39:57 +0000 (0:00:00.204) 0:00:00.204 **********
2026-03-09 00:40:03.061206 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:40:03.061221 | orchestrator |
2026-03-09 00:40:03.061233 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.061247 | orchestrator | Monday 09 March 2026 00:39:57 +0000 (0:00:00.118) 0:00:00.322 **********
2026-03-09 00:40:03.061260 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:40:03.061273 | orchestrator |
2026-03-09 00:40:03.061286 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.061325 | orchestrator | Monday 09 March 2026 00:39:58 +0000 (0:00:00.894) 0:00:01.216 **********
2026-03-09 00:40:03.061340 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:40:03.061354 | orchestrator |
2026-03-09 00:40:03.061368 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.061381 | orchestrator |
2026-03-09 00:40:03.061395 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.061408 | orchestrator | Monday 09 March 2026 00:39:58 +0000 (0:00:00.125) 0:00:01.342 **********
2026-03-09 00:40:03.061421 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:40:03.061434 | orchestrator |
2026-03-09 00:40:03.061447 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.061460 | orchestrator | Monday 09 March 2026 00:39:58 +0000 (0:00:00.123) 0:00:01.466 **********
2026-03-09 00:40:03.061473 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:40:03.061487 | orchestrator |
2026-03-09 00:40:03.061500 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.061571 | orchestrator | Monday 09 March 2026 00:39:59 +0000 (0:00:00.651) 0:00:02.117 **********
2026-03-09 00:40:03.061588 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:40:03.061602 | orchestrator |
2026-03-09 00:40:03.061615 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.061628 | orchestrator |
2026-03-09 00:40:03.061641 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.061654 | orchestrator | Monday 09 March 2026 00:39:59 +0000 (0:00:00.112) 0:00:02.230 **********
2026-03-09 00:40:03.061668 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:40:03.061681 | orchestrator |
2026-03-09 00:40:03.061694 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.061728 | orchestrator | Monday 09 March 2026 00:39:59 +0000 (0:00:00.211) 0:00:02.442 **********
2026-03-09 00:40:03.061753 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:40:03.061769 | orchestrator |
2026-03-09 00:40:03.061783 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.061798 | orchestrator | Monday 09 March 2026 00:40:00 +0000 (0:00:00.661) 0:00:03.103 **********
2026-03-09 00:40:03.061812 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:40:03.061826 | orchestrator |
2026-03-09 00:40:03.061839 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.061851 | orchestrator |
2026-03-09 00:40:03.061865 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.061879 | orchestrator | Monday 09 March 2026 00:40:00 +0000 (0:00:00.114) 0:00:03.217 **********
2026-03-09 00:40:03.061892 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:40:03.061905 | orchestrator |
2026-03-09 00:40:03.061919 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.061933 | orchestrator | Monday 09 March 2026 00:40:00 +0000 (0:00:00.117) 0:00:03.335 **********
2026-03-09 00:40:03.061947 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:40:03.061960 | orchestrator |
2026-03-09 00:40:03.061974 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.061987 | orchestrator | Monday 09 March 2026 00:40:00 +0000 (0:00:00.643) 0:00:03.979 **********
2026-03-09 00:40:03.062001 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:40:03.062077 | orchestrator |
2026-03-09 00:40:03.062094 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.062108 | orchestrator |
2026-03-09 00:40:03.062122 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.062135 | orchestrator | Monday 09 March 2026 00:40:01 +0000 (0:00:00.113) 0:00:04.092 **********
2026-03-09 00:40:03.062148 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:40:03.062163 | orchestrator |
2026-03-09 00:40:03.062177 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.062205 | orchestrator | Monday 09 March 2026 00:40:01 +0000 (0:00:00.129) 0:00:04.222 **********
2026-03-09 00:40:03.062220 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:40:03.062234 | orchestrator |
2026-03-09 00:40:03.062247 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.062261 | orchestrator | Monday 09 March 2026 00:40:01 +0000 (0:00:00.653) 0:00:04.876 **********
2026-03-09 00:40:03.062275 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:40:03.062289 | orchestrator |
2026-03-09 00:40:03.062302 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-09 00:40:03.062315 | orchestrator |
2026-03-09 00:40:03.062327 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-09 00:40:03.062341 | orchestrator | Monday 09 March 2026 00:40:01 +0000 (0:00:00.127) 0:00:05.003 **********
2026-03-09 00:40:03.062353 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:40:03.062366 | orchestrator |
2026-03-09 00:40:03.062379 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-09 00:40:03.062392 | orchestrator | Monday 09 March 2026 00:40:02 +0000 (0:00:00.101) 0:00:05.105 **********
2026-03-09 00:40:03.062400 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:40:03.062408 | orchestrator |
2026-03-09 00:40:03.062416 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-09 00:40:03.062424 | orchestrator | Monday 09 March 2026 00:40:02 +0000 (0:00:00.653) 0:00:05.758 **********
2026-03-09 00:40:03.062455 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:40:03.062463 | orchestrator |
2026-03-09 00:40:03.062473 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:40:03.062488 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:40:03.062503 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:40:03.062516 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:40:03.062554 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:40:03.062567 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:40:03.062580 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:40:03.062594 | orchestrator | 2026-03-09 00:40:03.062607 | orchestrator | 2026-03-09 00:40:03.062620 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:40:03.062634 | orchestrator | Monday 09 March 2026 00:40:02 +0000 (0:00:00.036) 0:00:05.795 ********** 2026-03-09 00:40:03.062656 | orchestrator | =============================================================================== 2026-03-09 00:40:03.062669 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.16s 2026-03-09 00:40:03.062682 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2026-03-09 00:40:03.062695 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-03-09 00:40:03.380699 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-09 00:40:15.533518 | orchestrator | 2026-03-09 00:40:15 | INFO  | Task 7a161bde-6383-4bc6-a99d-a0de80a46740 (wait-for-connection) was prepared for execution. 2026-03-09 00:40:15.533677 | orchestrator | 2026-03-09 00:40:15 | INFO  | It takes a moment until task 7a161bde-6383-4bc6-a99d-a0de80a46740 (wait-for-connection) has been started and output is visible here. 
2026-03-09 00:40:31.771584 | orchestrator | 2026-03-09 00:40:31.771747 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-09 00:40:31.771768 | orchestrator | 2026-03-09 00:40:31.771795 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-09 00:40:31.771807 | orchestrator | Monday 09 March 2026 00:40:19 +0000 (0:00:00.209) 0:00:00.209 ********** 2026-03-09 00:40:31.771833 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:40:31.771846 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:40:31.771857 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:40:31.771868 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:40:31.771879 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:40:31.771889 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:40:31.771900 | orchestrator | 2026-03-09 00:40:31.771911 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:40:31.771923 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771936 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771947 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771958 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771969 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771980 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:40:31.771991 | orchestrator | 2026-03-09 00:40:31.772002 | orchestrator | 2026-03-09 00:40:31.772013 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 00:40:31.772024 | orchestrator | Monday 09 March 2026 00:40:31 +0000 (0:00:11.597) 0:00:11.806 ********** 2026-03-09 00:40:31.772035 | orchestrator | =============================================================================== 2026-03-09 00:40:31.772046 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s 2026-03-09 00:40:32.082889 | orchestrator | + osism apply hddtemp 2026-03-09 00:40:44.239313 | orchestrator | 2026-03-09 00:40:44 | INFO  | Task cb0807b1-d886-4dfe-8249-b2c29de5e992 (hddtemp) was prepared for execution. 2026-03-09 00:40:44.239444 | orchestrator | 2026-03-09 00:40:44 | INFO  | It takes a moment until task cb0807b1-d886-4dfe-8249-b2c29de5e992 (hddtemp) has been started and output is visible here. 2026-03-09 00:41:10.840667 | orchestrator | 2026-03-09 00:41:10.840763 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-09 00:41:10.840776 | orchestrator | 2026-03-09 00:41:10.840785 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-09 00:41:10.840794 | orchestrator | Monday 09 March 2026 00:40:48 +0000 (0:00:00.277) 0:00:00.277 ********** 2026-03-09 00:41:10.840803 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:10.840812 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:10.840820 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:10.840828 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:10.840836 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:10.840844 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:10.840851 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:10.840859 | orchestrator | 2026-03-09 00:41:10.840867 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-09 00:41:10.840875 | orchestrator | Monday 09 March 2026 
00:40:49 +0000 (0:00:00.752) 0:00:01.029 ********** 2026-03-09 00:41:10.840885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:41:10.840917 | orchestrator | 2026-03-09 00:41:10.840926 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-09 00:41:10.840934 | orchestrator | Monday 09 March 2026 00:40:50 +0000 (0:00:01.208) 0:00:02.237 ********** 2026-03-09 00:41:10.840942 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:10.840949 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:10.840957 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:10.840965 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:10.840973 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:10.840981 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:10.840989 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:10.840997 | orchestrator | 2026-03-09 00:41:10.841004 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-09 00:41:10.841024 | orchestrator | Monday 09 March 2026 00:40:52 +0000 (0:00:01.699) 0:00:03.937 ********** 2026-03-09 00:41:10.841033 | orchestrator | changed: [testbed-manager] 2026-03-09 00:41:10.841042 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:41:10.841050 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:41:10.841058 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:41:10.841081 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:10.841089 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:41:10.841097 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:10.841105 | orchestrator | 2026-03-09 00:41:10.841113 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-03-09 00:41:10.841121 | orchestrator | Monday 09 March 2026 00:40:53 +0000 (0:00:01.051) 0:00:04.989 ********** 2026-03-09 00:41:10.841129 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:10.841137 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:10.841144 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:10.841152 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:10.841160 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:10.841168 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:10.841177 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:10.841186 | orchestrator | 2026-03-09 00:41:10.841196 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-09 00:41:10.841205 | orchestrator | Monday 09 March 2026 00:40:54 +0000 (0:00:01.093) 0:00:06.083 ********** 2026-03-09 00:41:10.841230 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:41:10.841249 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:41:10.841267 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:41:10.841277 | orchestrator | changed: [testbed-manager] 2026-03-09 00:41:10.841286 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:10.841296 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:10.841305 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:10.841314 | orchestrator | 2026-03-09 00:41:10.841323 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-09 00:41:10.841332 | orchestrator | Monday 09 March 2026 00:40:55 +0000 (0:00:00.698) 0:00:06.781 ********** 2026-03-09 00:41:10.841341 | orchestrator | changed: [testbed-manager] 2026-03-09 00:41:10.841350 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:41:10.841359 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:10.841368 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:41:10.841376 | orchestrator | changed: 
[testbed-node-4] 2026-03-09 00:41:10.841385 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:41:10.841394 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:10.841403 | orchestrator | 2026-03-09 00:41:10.841412 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-09 00:41:10.841421 | orchestrator | Monday 09 March 2026 00:41:07 +0000 (0:00:11.953) 0:00:18.735 ********** 2026-03-09 00:41:10.841431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:41:10.841447 | orchestrator | 2026-03-09 00:41:10.841457 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-09 00:41:10.841465 | orchestrator | Monday 09 March 2026 00:41:08 +0000 (0:00:01.279) 0:00:20.014 ********** 2026-03-09 00:41:10.841472 | orchestrator | changed: [testbed-manager] 2026-03-09 00:41:10.841480 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:41:10.841489 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:41:10.841496 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:10.841504 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:41:10.841512 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:41:10.841543 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:10.841553 | orchestrator | 2026-03-09 00:41:10.841561 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:41:10.841569 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:41:10.841594 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841603 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841611 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841619 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841627 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841635 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:41:10.841643 | orchestrator | 2026-03-09 00:41:10.841651 | orchestrator | 2026-03-09 00:41:10.841659 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:41:10.841667 | orchestrator | Monday 09 March 2026 00:41:10 +0000 (0:00:01.916) 0:00:21.931 ********** 2026-03-09 00:41:10.841677 | orchestrator | =============================================================================== 2026-03-09 00:41:10.841690 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.95s 2026-03-09 00:41:10.841703 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2026-03-09 00:41:10.841716 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.70s 2026-03-09 00:41:10.841734 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2026-03-09 00:41:10.841747 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2026-03-09 00:41:10.841760 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s 2026-03-09 00:41:10.841773 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.05s 2026-03-09 00:41:10.841786 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.75s 2026-03-09 00:41:10.841799 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.70s 2026-03-09 00:41:11.172859 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-09 00:41:11.221850 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-09 00:41:11.221945 | orchestrator | + sudo systemctl restart manager.service 2026-03-09 00:41:24.886860 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-09 00:41:24.886978 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-09 00:41:24.886994 | orchestrator | + local max_attempts=60 2026-03-09 00:41:24.887028 | orchestrator | + local name=ceph-ansible 2026-03-09 00:41:24.887050 | orchestrator | + local attempt_num=1 2026-03-09 00:41:24.887062 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:24.943331 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:24.943397 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:24.943405 | orchestrator | + sleep 5 2026-03-09 00:41:29.948575 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:29.981111 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:29.981189 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:29.981200 | orchestrator | + sleep 5 2026-03-09 00:41:34.984010 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:35.010833 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:35.010925 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:35.010940 | orchestrator | + sleep 5 2026-03-09 00:41:40.014219 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:40.053982 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:40.054116 | orchestrator | + 
(( attempt_num++ == max_attempts )) 2026-03-09 00:41:40.054130 | orchestrator | + sleep 5 2026-03-09 00:41:45.058310 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:45.095473 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:45.095658 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:45.095684 | orchestrator | + sleep 5 2026-03-09 00:41:50.099789 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:50.141835 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:50.141910 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:50.141920 | orchestrator | + sleep 5 2026-03-09 00:41:55.146951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:41:55.192206 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:41:55.192301 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:41:55.192749 | orchestrator | + sleep 5 2026-03-09 00:42:00.198158 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:00.249007 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:00.249091 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:42:00.249103 | orchestrator | + sleep 5 2026-03-09 00:42:05.251786 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:05.294069 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:05.294153 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:42:05.294166 | orchestrator | + sleep 5 2026-03-09 00:42:10.296903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:10.330267 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:10.330393 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-09 00:42:10.330419 | orchestrator | + sleep 5 2026-03-09 00:42:15.336005 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:15.362703 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:15.362788 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:42:15.362801 | orchestrator | + sleep 5 2026-03-09 00:42:20.369002 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:20.408194 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:20.408901 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:42:20.408972 | orchestrator | + sleep 5 2026-03-09 00:42:25.413253 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:25.449242 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:25.449304 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:42:25.449314 | orchestrator | + sleep 5 2026-03-09 00:42:30.454322 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:42:30.492902 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:30.493029 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-09 00:42:30.493055 | orchestrator | + local max_attempts=60 2026-03-09 00:42:30.493074 | orchestrator | + local name=kolla-ansible 2026-03-09 00:42:30.493092 | orchestrator | + local attempt_num=1 2026-03-09 00:42:30.493897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-09 00:42:30.531668 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:30.531830 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-09 00:42:30.531884 | orchestrator | + local max_attempts=60 2026-03-09 00:42:30.531897 | orchestrator | + local name=osism-ansible 2026-03-09 00:42:30.531909 | 
orchestrator | + local attempt_num=1 2026-03-09 00:42:30.531932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-09 00:42:30.569817 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:42:30.569941 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-09 00:42:30.569966 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-09 00:42:30.750008 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-09 00:42:30.906673 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-09 00:42:31.083974 | orchestrator | ARA in osism-ansible already disabled. 2026-03-09 00:42:31.246227 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-09 00:42:31.247023 | orchestrator | + osism apply gather-facts 2026-03-09 00:42:43.495053 | orchestrator | 2026-03-09 00:42:43 | INFO  | Task b1d75f32-e98a-4a31-b870-5c563e8692ff (gather-facts) was prepared for execution. 2026-03-09 00:42:43.495159 | orchestrator | 2026-03-09 00:42:43 | INFO  | It takes a moment until task b1d75f32-e98a-4a31-b870-5c563e8692ff (gather-facts) has been started and output is visible here. 
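The `+`/`++` xtrace lines above show a polling helper, `wait_for_container_healthy`, that repeatedly reads a container's health status via `docker inspect -f '{{.State.Health.Status}}'` and sleeps 5 seconds between attempts. A minimal sketch of that loop, reconstructed only from the trace (the real script's source is not shown here); `container_health` is a stub standing in for the `docker inspect` call so the sketch runs without Docker:

```shell
#!/usr/bin/env bash
# Stub for: /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
# Pretends the container becomes healthy on the 3rd poll.
container_health() {
    local n=$1
    if [ "$n" -ge 3 ]; then echo healthy; else echo starting; fi
}

# Reconstructed from the xtrace: poll until healthy, give up after
# max_attempts tries. The real loop sleeps 5 seconds between polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [ "$(container_health "$attempt_num")" = healthy ]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 0  # real script: sleep 5
    done
}

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
```

In the trace above, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute after the manager restart before the loop sees `healthy` and moves on to `kolla-ansible` and `osism-ansible`, which pass on the first poll.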
2026-03-09 00:42:56.540124 | orchestrator | 2026-03-09 00:42:56.540239 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:42:56.540256 | orchestrator | 2026-03-09 00:42:56.540269 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-09 00:42:56.540280 | orchestrator | Monday 09 March 2026 00:42:47 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-03-09 00:42:56.540292 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:42:56.540304 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:42:56.540315 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:42:56.540326 | orchestrator | ok: [testbed-manager] 2026-03-09 00:42:56.540337 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:42:56.540348 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:42:56.540358 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:42:56.540369 | orchestrator | 2026-03-09 00:42:56.540380 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:42:56.540391 | orchestrator | 2026-03-09 00:42:56.540402 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:42:56.540413 | orchestrator | Monday 09 March 2026 00:42:55 +0000 (0:00:07.633) 0:00:07.851 ********** 2026-03-09 00:42:56.540424 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:42:56.540436 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:42:56.540447 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:42:56.540458 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:42:56.540469 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:42:56.540479 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:56.540495 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:42:56.540545 | orchestrator | 2026-03-09 00:42:56.540565 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-09 00:42:56.540585 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540607 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540626 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540645 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540661 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540672 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540709 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:42:56.540720 | orchestrator | 2026-03-09 00:42:56.540731 | orchestrator | 2026-03-09 00:42:56.540742 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:42:56.540753 | orchestrator | Monday 09 March 2026 00:42:56 +0000 (0:00:00.568) 0:00:08.420 ********** 2026-03-09 00:42:56.540764 | orchestrator | =============================================================================== 2026-03-09 00:42:56.540775 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.63s 2026-03-09 00:42:56.540786 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-03-09 00:42:56.878477 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-09 00:42:56.894240 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-09 
00:42:56.905688 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-09 00:42:56.921403 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-09 00:42:56.932335 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-09 00:42:56.943816 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-09 00:42:56.958119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-09 00:42:56.968390 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-09 00:42:56.979609 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-09 00:42:56.992020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-09 00:42:57.006785 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-09 00:42:57.017979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-09 00:42:57.029393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-09 00:42:57.040902 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-09 00:42:57.053797 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-09 00:42:57.065993 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-09 00:42:57.085407 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-09 00:42:57.100671 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-09 00:42:57.116167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-09 00:42:57.133919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-09 00:42:57.149038 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-09 00:42:57.172617 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-09 00:42:57.193304 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-09 00:42:57.207499 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-09 00:42:57.477170 | orchestrator | ok: Runtime: 0:24:32.624750 2026-03-09 00:42:57.581308 | 2026-03-09 00:42:57.581480 | TASK [Deploy services] 2026-03-09 00:42:58.114815 | orchestrator | skipping: Conditional result was False 2026-03-09 00:42:58.133680 | 2026-03-09 00:42:58.133855 | TASK [Deploy in a nutshell] 2026-03-09 00:42:58.888709 | orchestrator | + set -e 2026-03-09 00:42:58.888834 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:42:58.888845 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:42:58.888854 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:42:58.888860 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:42:58.888865 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-09 00:42:58.888878 | 
orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:42:58.888900 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:42:58.888911 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:42:58.888917 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:42:58.888923 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:42:58.888927 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 00:42:58.888935 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:42:58.888939 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-09 00:42:58.888948 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-09 00:42:58.888952 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:42:58.889030 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:42:58.889036 | orchestrator | ++ export ARA=false 2026-03-09 00:42:58.889040 | orchestrator | ++ ARA=false 2026-03-09 00:42:58.889045 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:42:58.889050 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:42:58.889053 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:42:58.889057 | orchestrator | ++ TEMPEST=true 2026-03-09 00:42:58.889061 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:42:58.889126 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:42:58.889226 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:42:58.889233 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.70 2026-03-09 00:42:58.889237 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:42:58.889240 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:42:58.889244 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:42:58.889248 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:42:58.889252 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:42:58.889256 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:42:58.889260 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:42:58.889264 | 
orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:42:58.889270 | orchestrator | 2026-03-09 00:42:58.889274 | orchestrator | # PULL IMAGES 2026-03-09 00:42:58.889278 | orchestrator | 2026-03-09 00:42:58.889282 | orchestrator | + echo 2026-03-09 00:42:58.889286 | orchestrator | + echo '# PULL IMAGES' 2026-03-09 00:42:58.889290 | orchestrator | + echo 2026-03-09 00:42:58.890448 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-09 00:42:58.938087 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-09 00:42:58.938203 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-09 00:43:00.929643 | orchestrator | 2026-03-09 00:43:00 | INFO  | Trying to run play pull-images in environment custom 2026-03-09 00:43:11.008979 | orchestrator | 2026-03-09 00:43:11 | INFO  | Task 9e3c48f5-77da-4b6b-a931-364574bf47f2 (pull-images) was prepared for execution. 2026-03-09 00:43:11.009066 | orchestrator | 2026-03-09 00:43:11 | INFO  | Task 9e3c48f5-77da-4b6b-a931-364574bf47f2 is running in background. No more output. Check ARA for logs. 2026-03-09 00:43:13.352895 | orchestrator | 2026-03-09 00:43:13 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-09 00:43:23.496314 | orchestrator | 2026-03-09 00:43:23 | INFO  | Task d31f6b85-5773-4e58-8523-dce0cee008a9 (wipe-partitions) was prepared for execution. 2026-03-09 00:43:23.496413 | orchestrator | 2026-03-09 00:43:23 | INFO  | It takes a moment until task d31f6b85-5773-4e58-8523-dce0cee008a9 (wipe-partitions) has been started and output is visible here. 
2026-03-09 00:43:37.817412 | orchestrator | 2026-03-09 00:43:37.817583 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-09 00:43:37.817611 | orchestrator | 2026-03-09 00:43:37.817624 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-09 00:43:37.817644 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.133) 0:00:00.133 ********** 2026-03-09 00:43:37.817658 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:43:37.817671 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:43:37.817683 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:43:37.817694 | orchestrator | 2026-03-09 00:43:37.817706 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-09 00:43:37.817744 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.638) 0:00:00.772 ********** 2026-03-09 00:43:37.817756 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:37.817767 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:37.817779 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:37.817794 | orchestrator | 2026-03-09 00:43:37.817805 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-09 00:43:37.817816 | orchestrator | Monday 09 March 2026 00:43:29 +0000 (0:00:00.445) 0:00:01.217 ********** 2026-03-09 00:43:37.817827 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:37.817839 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:37.817850 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:43:37.817860 | orchestrator | 2026-03-09 00:43:37.817872 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-09 00:43:37.817883 | orchestrator | Monday 09 March 2026 00:43:29 +0000 (0:00:00.631) 0:00:01.849 ********** 2026-03-09 00:43:37.817894 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:43:37.817905 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:37.817916 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:43:37.817927 | orchestrator | 2026-03-09 00:43:37.817938 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-09 00:43:37.817949 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.266) 0:00:02.115 ********** 2026-03-09 00:43:37.817960 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:43:37.817975 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:43:37.817986 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:43:37.817997 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:43:37.818008 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:43:37.818118 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:43:37.818130 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:43:37.818141 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:43:37.818152 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:43:37.818163 | orchestrator | 2026-03-09 00:43:37.818173 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-09 00:43:37.818184 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:01.276) 0:00:03.392 ********** 2026-03-09 00:43:37.818196 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:43:37.818207 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:43:37.818218 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:43:37.818229 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:43:37.818239 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:43:37.818250 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:43:37.818261 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:43:37.818272 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:43:37.818282 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:43:37.818293 | orchestrator | 2026-03-09 00:43:37.818304 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-09 00:43:37.818315 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:01.598) 0:00:04.990 ********** 2026-03-09 00:43:37.818326 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:43:37.818337 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:43:37.818347 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:43:37.818358 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:43:37.818376 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:43:37.818387 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:43:37.818398 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:43:37.818409 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:43:37.818433 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:43:37.818445 | orchestrator | 2026-03-09 00:43:37.818455 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-09 00:43:37.818466 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:03.050) 0:00:08.041 ********** 2026-03-09 00:43:37.818477 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:43:37.818488 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:43:37.818499 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:43:37.818540 | orchestrator | 2026-03-09 00:43:37.818552 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-09 00:43:37.818564 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.646) 0:00:08.687 ********** 2026-03-09 00:43:37.818575 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:43:37.818586 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:43:37.818597 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:43:37.818608 | orchestrator | 2026-03-09 00:43:37.818618 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:43:37.818631 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:43:37.818645 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:43:37.818676 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:43:37.818688 | orchestrator | 2026-03-09 00:43:37.818699 | orchestrator | 2026-03-09 00:43:37.818710 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:43:37.818721 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.633) 0:00:09.321 ********** 2026-03-09 00:43:37.818732 | orchestrator | =============================================================================== 2026-03-09 00:43:37.818743 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.05s 2026-03-09 00:43:37.818754 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s 2026-03-09 00:43:37.818765 | orchestrator | Check device availability ----------------------------------------------- 1.28s 2026-03-09 00:43:37.818776 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s 2026-03-09 00:43:37.818787 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.64s 2026-03-09 00:43:37.818797 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-03-09 00:43:37.818808 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2026-03-09 00:43:37.818819 | orchestrator | Remove all rook related logical devices --------------------------------- 0.45s 2026-03-09 00:43:37.818830 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2026-03-09 00:43:50.190787 | orchestrator | 2026-03-09 00:43:50 | INFO  | Task 74a018a3-3c64-439d-8d6e-e475d6342b9c (facts) was prepared for execution. 2026-03-09 00:43:50.190903 | orchestrator | 2026-03-09 00:43:50 | INFO  | It takes a moment until task 74a018a3-3c64-439d-8d6e-e475d6342b9c (facts) has been started and output is visible here. 2026-03-09 00:44:02.755154 | orchestrator | 2026-03-09 00:44:02.755333 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-09 00:44:02.755358 | orchestrator | 2026-03-09 00:44:02.755371 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:44:02.756143 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.300) 0:00:00.300 ********** 2026-03-09 00:44:02.756203 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:44:02.756211 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:44:02.756217 | orchestrator | ok: [testbed-manager] 2026-03-09 00:44:02.756222 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:44:02.756243 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:02.756248 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:02.756253 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:44:02.756258 | orchestrator | 2026-03-09 00:44:02.756266 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:44:02.756271 | 
orchestrator | Monday 09 March 2026 00:43:55 +0000 (0:00:01.087) 0:00:01.387 ********** 2026-03-09 00:44:02.756276 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:44:02.756281 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:44:02.756287 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:44:02.756291 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:44:02.756296 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:02.756301 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.756306 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:02.756310 | orchestrator | 2026-03-09 00:44:02.756315 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:44:02.756318 | orchestrator | 2026-03-09 00:44:02.756322 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-09 00:44:02.756326 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:01.423) 0:00:02.811 ********** 2026-03-09 00:44:02.756330 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:44:02.756334 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:44:02.756337 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:44:02.756342 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:02.756346 | orchestrator | ok: [testbed-manager] 2026-03-09 00:44:02.756349 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:02.756353 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:44:02.756357 | orchestrator | 2026-03-09 00:44:02.756361 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:44:02.756365 | orchestrator | 2026-03-09 00:44:02.756368 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:44:02.756380 | orchestrator | Monday 09 March 2026 00:44:01 +0000 (0:00:04.618) 0:00:07.429 ********** 2026-03-09 00:44:02.756384 | orchestrator | 
skipping: [testbed-manager] 2026-03-09 00:44:02.756388 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:44:02.756392 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:44:02.756396 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:44:02.756400 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:02.756404 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.756408 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:02.756411 | orchestrator | 2026-03-09 00:44:02.756415 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:44:02.756420 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756426 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756430 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756434 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756438 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756442 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756446 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:02.756450 | orchestrator | 2026-03-09 00:44:02.756454 | orchestrator | 2026-03-09 00:44:02.756458 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:44:02.756466 | orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.543) 0:00:07.973 ********** 2026-03-09 00:44:02.756473 | orchestrator | =============================================================================== 
2026-03-09 00:44:02.756479 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.62s 2026-03-09 00:44:02.756486 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s 2026-03-09 00:44:02.756491 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2026-03-09 00:44:02.756498 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-03-09 00:44:05.282286 | orchestrator | 2026-03-09 00:44:05 | INFO  | Task 2640ece6-43c3-4b02-abe8-c6f6c08302a0 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-09 00:44:05.282379 | orchestrator | 2026-03-09 00:44:05 | INFO  | It takes a moment until task 2640ece6-43c3-4b02-abe8-c6f6c08302a0 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-09 00:44:16.744407 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 00:44:16.744527 | orchestrator | 2.16.14 2026-03-09 00:44:16.744546 | orchestrator | 2026-03-09 00:44:16.744558 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-09 00:44:16.744569 | orchestrator | 2026-03-09 00:44:16.744583 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:44:16.744595 | orchestrator | Monday 09 March 2026 00:44:09 +0000 (0:00:00.342) 0:00:00.342 ********** 2026-03-09 00:44:16.744606 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:44:16.744617 | orchestrator | 2026-03-09 00:44:16.744628 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:44:16.744639 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.249) 0:00:00.591 ********** 2026-03-09 00:44:16.744650 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:16.744661 | orchestrator | 
2026-03-09 00:44:16.744672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.744683 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.244) 0:00:00.836 ********** 2026-03-09 00:44:16.744694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:44:16.744705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:44:16.744716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:44:16.744727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:44:16.744738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-09 00:44:16.744749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:44:16.744759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:44:16.744770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:44:16.744781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-09 00:44:16.744792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:44:16.744810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:44:16.744822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:44:16.744833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:44:16.744843 | orchestrator | 2026-03-09 00:44:16.744854 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-03-09 00:44:16.744865 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.492) 0:00:01.328 ********** 2026-03-09 00:44:16.744897 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.744909 | orchestrator | 2026-03-09 00:44:16.744920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.744931 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.200) 0:00:01.528 ********** 2026-03-09 00:44:16.744941 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.744954 | orchestrator | 2026-03-09 00:44:16.744967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.744979 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.200) 0:00:01.729 ********** 2026-03-09 00:44:16.744992 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745005 | orchestrator | 2026-03-09 00:44:16.745018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745042 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.218) 0:00:01.947 ********** 2026-03-09 00:44:16.745060 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745072 | orchestrator | 2026-03-09 00:44:16.745086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745099 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.200) 0:00:02.148 ********** 2026-03-09 00:44:16.745111 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745124 | orchestrator | 2026-03-09 00:44:16.745137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745150 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.178) 0:00:02.327 ********** 2026-03-09 00:44:16.745163 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:44:16.745176 | orchestrator | 2026-03-09 00:44:16.745189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745202 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.175) 0:00:02.502 ********** 2026-03-09 00:44:16.745214 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745227 | orchestrator | 2026-03-09 00:44:16.745240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745253 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.179) 0:00:02.681 ********** 2026-03-09 00:44:16.745265 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745278 | orchestrator | 2026-03-09 00:44:16.745292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745305 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.200) 0:00:02.882 ********** 2026-03-09 00:44:16.745316 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3) 2026-03-09 00:44:16.745328 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3) 2026-03-09 00:44:16.745339 | orchestrator | 2026-03-09 00:44:16.745350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745375 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.405) 0:00:03.287 ********** 2026-03-09 00:44:16.745387 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29) 2026-03-09 00:44:16.745398 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29) 2026-03-09 00:44:16.745409 | orchestrator | 2026-03-09 00:44:16.745420 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-03-09 00:44:16.745430 | orchestrator | Monday 09 March 2026 00:44:13 +0000 (0:00:00.555) 0:00:03.842 ********** 2026-03-09 00:44:16.745441 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f) 2026-03-09 00:44:16.745452 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f) 2026-03-09 00:44:16.745462 | orchestrator | 2026-03-09 00:44:16.745473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745484 | orchestrator | Monday 09 March 2026 00:44:13 +0000 (0:00:00.569) 0:00:04.412 ********** 2026-03-09 00:44:16.745519 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112) 2026-03-09 00:44:16.745531 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112) 2026-03-09 00:44:16.745542 | orchestrator | 2026-03-09 00:44:16.745553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:16.745564 | orchestrator | Monday 09 March 2026 00:44:14 +0000 (0:00:00.718) 0:00:05.131 ********** 2026-03-09 00:44:16.745574 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:44:16.745585 | orchestrator | 2026-03-09 00:44:16.745601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745613 | orchestrator | Monday 09 March 2026 00:44:14 +0000 (0:00:00.323) 0:00:05.454 ********** 2026-03-09 00:44:16.745624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:44:16.745634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:44:16.745645 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:44:16.745656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:44:16.745666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-09 00:44:16.745677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:44:16.745688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:44:16.745699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:44:16.745709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-09 00:44:16.745720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:44:16.745731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:44:16.745741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:44:16.745752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:44:16.745763 | orchestrator | 2026-03-09 00:44:16.745774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745785 | orchestrator | Monday 09 March 2026 00:44:15 +0000 (0:00:00.354) 0:00:05.808 ********** 2026-03-09 00:44:16.745795 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745806 | orchestrator | 2026-03-09 00:44:16.745817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745828 | orchestrator | Monday 09 March 2026 00:44:15 +0000 (0:00:00.194) 
0:00:06.003 ********** 2026-03-09 00:44:16.745838 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745849 | orchestrator | 2026-03-09 00:44:16.745860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745870 | orchestrator | Monday 09 March 2026 00:44:15 +0000 (0:00:00.218) 0:00:06.221 ********** 2026-03-09 00:44:16.745881 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745892 | orchestrator | 2026-03-09 00:44:16.745903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745914 | orchestrator | Monday 09 March 2026 00:44:15 +0000 (0:00:00.188) 0:00:06.410 ********** 2026-03-09 00:44:16.745925 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745935 | orchestrator | 2026-03-09 00:44:16.745946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.745957 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.188) 0:00:06.598 ********** 2026-03-09 00:44:16.745968 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.745986 | orchestrator | 2026-03-09 00:44:16.745997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.746008 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.182) 0:00:06.781 ********** 2026-03-09 00:44:16.746073 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.746086 | orchestrator | 2026-03-09 00:44:16.746097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:16.746108 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.187) 0:00:06.968 ********** 2026-03-09 00:44:16.746152 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:16.746163 | orchestrator | 2026-03-09 00:44:16.746182 | orchestrator | TASK [Add known partitions to 
the list of available block devices] ************* 2026-03-09 00:44:25.089270 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.244) 0:00:07.213 ********** 2026-03-09 00:44:25.089385 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089404 | orchestrator | 2026-03-09 00:44:25.089417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:25.089430 | orchestrator | Monday 09 March 2026 00:44:17 +0000 (0:00:00.287) 0:00:07.500 ********** 2026-03-09 00:44:25.089441 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-09 00:44:25.089453 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-09 00:44:25.089464 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-09 00:44:25.089475 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-09 00:44:25.089486 | orchestrator | 2026-03-09 00:44:25.089498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:25.089545 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:01.181) 0:00:08.681 ********** 2026-03-09 00:44:25.089556 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089568 | orchestrator | 2026-03-09 00:44:25.089579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:25.089590 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.215) 0:00:08.897 ********** 2026-03-09 00:44:25.089601 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089612 | orchestrator | 2026-03-09 00:44:25.089623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:25.089634 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.200) 0:00:09.098 ********** 2026-03-09 00:44:25.089645 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089656 | orchestrator | 2026-03-09 00:44:25.089667 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:25.089679 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.214) 0:00:09.312 ********** 2026-03-09 00:44:25.089690 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089701 | orchestrator | 2026-03-09 00:44:25.089712 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:44:25.089723 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.197) 0:00:09.510 ********** 2026-03-09 00:44:25.089734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:44:25.089745 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:44:25.089756 | orchestrator | 2026-03-09 00:44:25.089778 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-09 00:44:25.089790 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.176) 0:00:09.686 ********** 2026-03-09 00:44:25.089801 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089812 | orchestrator | 2026-03-09 00:44:25.089823 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:44:25.089834 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.131) 0:00:09.818 ********** 2026-03-09 00:44:25.089845 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089856 | orchestrator | 2026-03-09 00:44:25.089867 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:44:25.089878 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.130) 0:00:09.948 ********** 2026-03-09 00:44:25.089915 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.089934 | orchestrator | 2026-03-09 00:44:25.089952 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-03-09 00:44:25.089970 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.132) 0:00:10.081 ********** 2026-03-09 00:44:25.089989 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:25.090008 | orchestrator | 2026-03-09 00:44:25.090086 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:44:25.090099 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.131) 0:00:10.212 ********** 2026-03-09 00:44:25.090111 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a76ca51e-4549-54be-bcb5-a2c49bca5f85'}}) 2026-03-09 00:44:25.090123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}}) 2026-03-09 00:44:25.090134 | orchestrator | 2026-03-09 00:44:25.090145 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:44:25.090156 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.167) 0:00:10.380 ********** 2026-03-09 00:44:25.090204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a76ca51e-4549-54be-bcb5-a2c49bca5f85'}})  2026-03-09 00:44:25.090223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}})  2026-03-09 00:44:25.090235 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090246 | orchestrator | 2026-03-09 00:44:25.090257 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:44:25.090268 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.143) 0:00:10.523 ********** 2026-03-09 00:44:25.090279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a76ca51e-4549-54be-bcb5-a2c49bca5f85'}})  2026-03-09 00:44:25.090291 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}})  2026-03-09 00:44:25.090309 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090325 | orchestrator | 2026-03-09 00:44:25.090336 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:44:25.090347 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.395) 0:00:10.919 ********** 2026-03-09 00:44:25.090357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a76ca51e-4549-54be-bcb5-a2c49bca5f85'}})  2026-03-09 00:44:25.090388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}})  2026-03-09 00:44:25.090400 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090411 | orchestrator | 2026-03-09 00:44:25.090422 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:44:25.090439 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.158) 0:00:11.077 ********** 2026-03-09 00:44:25.090450 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:25.090461 | orchestrator | 2026-03-09 00:44:25.090472 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:44:25.090483 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.139) 0:00:11.217 ********** 2026-03-09 00:44:25.090494 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:25.090535 | orchestrator | 2026-03-09 00:44:25.090548 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:44:25.090559 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.154) 0:00:11.372 ********** 2026-03-09 00:44:25.090570 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090581 | orchestrator | 
2026-03-09 00:44:25.090592 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:44:25.090603 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.133) 0:00:11.506 ********** 2026-03-09 00:44:25.090624 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090635 | orchestrator | 2026-03-09 00:44:25.090646 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:44:25.090658 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.125) 0:00:11.631 ********** 2026-03-09 00:44:25.090669 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090680 | orchestrator | 2026-03-09 00:44:25.090691 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:44:25.090701 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.165) 0:00:11.798 ********** 2026-03-09 00:44:25.090713 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:44:25.090723 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:44:25.090735 | orchestrator |  "sdb": { 2026-03-09 00:44:25.090747 | orchestrator |  "osd_lvm_uuid": "a76ca51e-4549-54be-bcb5-a2c49bca5f85" 2026-03-09 00:44:25.090758 | orchestrator |  }, 2026-03-09 00:44:25.090770 | orchestrator |  "sdc": { 2026-03-09 00:44:25.090781 | orchestrator |  "osd_lvm_uuid": "30c2fd4e-0770-5a21-8e5f-9ea8386abee3" 2026-03-09 00:44:25.090792 | orchestrator |  } 2026-03-09 00:44:25.090803 | orchestrator |  } 2026-03-09 00:44:25.090815 | orchestrator | } 2026-03-09 00:44:25.090826 | orchestrator | 2026-03-09 00:44:25.090838 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:44:25.090849 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.159) 0:00:11.957 ********** 2026-03-09 00:44:25.090860 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090871 | orchestrator | 
2026-03-09 00:44:25.090882 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:44:25.090893 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.231) 0:00:12.188 ********** 2026-03-09 00:44:25.090903 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090914 | orchestrator | 2026-03-09 00:44:25.090925 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-09 00:44:25.090936 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.128) 0:00:12.317 ********** 2026-03-09 00:44:25.090947 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:25.090958 | orchestrator | 2026-03-09 00:44:25.090969 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-09 00:44:25.090980 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.131) 0:00:12.448 ********** 2026-03-09 00:44:25.090991 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 00:44:25.091002 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-09 00:44:25.091013 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:44:25.091024 | orchestrator |  "sdb": { 2026-03-09 00:44:25.091035 | orchestrator |  "osd_lvm_uuid": "a76ca51e-4549-54be-bcb5-a2c49bca5f85" 2026-03-09 00:44:25.091046 | orchestrator |  }, 2026-03-09 00:44:25.091057 | orchestrator |  "sdc": { 2026-03-09 00:44:25.091068 | orchestrator |  "osd_lvm_uuid": "30c2fd4e-0770-5a21-8e5f-9ea8386abee3" 2026-03-09 00:44:25.091079 | orchestrator |  } 2026-03-09 00:44:25.091090 | orchestrator |  }, 2026-03-09 00:44:25.091101 | orchestrator |  "lvm_volumes": [ 2026-03-09 00:44:25.091113 | orchestrator |  { 2026-03-09 00:44:25.091124 | orchestrator |  "data": "osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85", 2026-03-09 00:44:25.091135 | orchestrator |  "data_vg": "ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85" 2026-03-09 00:44:25.091146 | orchestrator |  }, 
2026-03-09 00:44:25.091156 | orchestrator |  { 2026-03-09 00:44:25.091167 | orchestrator |  "data": "osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3", 2026-03-09 00:44:25.091179 | orchestrator |  "data_vg": "ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3" 2026-03-09 00:44:25.091195 | orchestrator |  } 2026-03-09 00:44:25.091206 | orchestrator |  ] 2026-03-09 00:44:25.091217 | orchestrator |  } 2026-03-09 00:44:25.091229 | orchestrator | } 2026-03-09 00:44:25.091246 | orchestrator | 2026-03-09 00:44:25.091257 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-09 00:44:25.091268 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.429) 0:00:12.877 ********** 2026-03-09 00:44:25.091279 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:44:25.091290 | orchestrator | 2026-03-09 00:44:25.091301 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-09 00:44:25.091312 | orchestrator | 2026-03-09 00:44:25.091323 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:44:25.091334 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:02.104) 0:00:14.981 ********** 2026-03-09 00:44:25.091345 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:44:25.091356 | orchestrator | 2026-03-09 00:44:25.091367 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:44:25.091378 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:00.349) 0:00:15.331 ********** 2026-03-09 00:44:25.091389 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:25.091400 | orchestrator | 2026-03-09 00:44:25.091418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.631637 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.230) 
0:00:15.561 ********** 2026-03-09 00:44:33.631767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:44:33.631785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:44:33.631798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:44:33.631809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:44:33.631819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:44:33.631830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:44:33.631841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:44:33.631852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:44:33.631863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-09 00:44:33.631873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:44:33.631884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:44:33.631894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:44:33.631911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:44:33.631923 | orchestrator | 2026-03-09 00:44:33.631935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.631946 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.397) 0:00:15.959 ********** 2026-03-09 00:44:33.631958 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:44:33.631969 | orchestrator | 2026-03-09 00:44:33.631980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.631991 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.214) 0:00:16.173 ********** 2026-03-09 00:44:33.632002 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632013 | orchestrator | 2026-03-09 00:44:33.632024 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632035 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.202) 0:00:16.376 ********** 2026-03-09 00:44:33.632046 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632057 | orchestrator | 2026-03-09 00:44:33.632068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632079 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.224) 0:00:16.600 ********** 2026-03-09 00:44:33.632116 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632128 | orchestrator | 2026-03-09 00:44:33.632139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632150 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.182) 0:00:16.783 ********** 2026-03-09 00:44:33.632161 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632172 | orchestrator | 2026-03-09 00:44:33.632183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632194 | orchestrator | Monday 09 March 2026 00:44:27 +0000 (0:00:00.716) 0:00:17.499 ********** 2026-03-09 00:44:33.632205 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632216 | orchestrator | 2026-03-09 00:44:33.632243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632255 | 
orchestrator | Monday 09 March 2026 00:44:27 +0000 (0:00:00.348) 0:00:17.848 ********** 2026-03-09 00:44:33.632266 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632276 | orchestrator | 2026-03-09 00:44:33.632287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632298 | orchestrator | Monday 09 March 2026 00:44:27 +0000 (0:00:00.216) 0:00:18.064 ********** 2026-03-09 00:44:33.632309 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632320 | orchestrator | 2026-03-09 00:44:33.632330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632341 | orchestrator | Monday 09 March 2026 00:44:27 +0000 (0:00:00.217) 0:00:18.282 ********** 2026-03-09 00:44:33.632352 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127) 2026-03-09 00:44:33.632365 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127) 2026-03-09 00:44:33.632376 | orchestrator | 2026-03-09 00:44:33.632387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632397 | orchestrator | Monday 09 March 2026 00:44:28 +0000 (0:00:00.425) 0:00:18.707 ********** 2026-03-09 00:44:33.632409 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d) 2026-03-09 00:44:33.632419 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d) 2026-03-09 00:44:33.632430 | orchestrator | 2026-03-09 00:44:33.632441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632452 | orchestrator | Monday 09 March 2026 00:44:28 +0000 (0:00:00.458) 0:00:19.165 ********** 2026-03-09 00:44:33.632463 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238) 2026-03-09 00:44:33.632474 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238) 2026-03-09 00:44:33.632485 | orchestrator | 2026-03-09 00:44:33.632496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632588 | orchestrator | Monday 09 March 2026 00:44:29 +0000 (0:00:00.447) 0:00:19.613 ********** 2026-03-09 00:44:33.632600 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16) 2026-03-09 00:44:33.632611 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16) 2026-03-09 00:44:33.632622 | orchestrator | 2026-03-09 00:44:33.632634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:33.632645 | orchestrator | Monday 09 March 2026 00:44:29 +0000 (0:00:00.455) 0:00:20.069 ********** 2026-03-09 00:44:33.632655 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:44:33.632666 | orchestrator | 2026-03-09 00:44:33.632677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.632688 | orchestrator | Monday 09 March 2026 00:44:29 +0000 (0:00:00.376) 0:00:20.445 ********** 2026-03-09 00:44:33.632698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:44:33.632719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:44:33.632729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:44:33.632740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:44:33.632751 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:44:33.632761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:44:33.632772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:44:33.632783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:44:33.632794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-09 00:44:33.632804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:44:33.632815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:44:33.632826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:44:33.632836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:44:33.632847 | orchestrator | 2026-03-09 00:44:33.632858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.632868 | orchestrator | Monday 09 March 2026 00:44:30 +0000 (0:00:00.395) 0:00:20.841 ********** 2026-03-09 00:44:33.632879 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632890 | orchestrator | 2026-03-09 00:44:33.632901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.632917 | orchestrator | Monday 09 March 2026 00:44:31 +0000 (0:00:00.864) 0:00:21.706 ********** 2026-03-09 00:44:33.632928 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632939 | orchestrator | 2026-03-09 00:44:33.632950 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-09 00:44:33.632961 | orchestrator | Monday 09 March 2026 00:44:31 +0000 (0:00:00.234) 0:00:21.940 ********** 2026-03-09 00:44:33.632972 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.632982 | orchestrator | 2026-03-09 00:44:33.632993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633004 | orchestrator | Monday 09 March 2026 00:44:31 +0000 (0:00:00.211) 0:00:22.152 ********** 2026-03-09 00:44:33.633015 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.633026 | orchestrator | 2026-03-09 00:44:33.633037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633048 | orchestrator | Monday 09 March 2026 00:44:31 +0000 (0:00:00.248) 0:00:22.401 ********** 2026-03-09 00:44:33.633059 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.633070 | orchestrator | 2026-03-09 00:44:33.633081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633090 | orchestrator | Monday 09 March 2026 00:44:32 +0000 (0:00:00.185) 0:00:22.586 ********** 2026-03-09 00:44:33.633100 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.633109 | orchestrator | 2026-03-09 00:44:33.633119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633128 | orchestrator | Monday 09 March 2026 00:44:32 +0000 (0:00:00.195) 0:00:22.782 ********** 2026-03-09 00:44:33.633138 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:33.633148 | orchestrator | 2026-03-09 00:44:33.633157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633167 | orchestrator | Monday 09 March 2026 00:44:32 +0000 (0:00:00.189) 0:00:22.971 ********** 2026-03-09 00:44:33.633176 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 00:44:33.633193 | orchestrator | 2026-03-09 00:44:33.633203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633213 | orchestrator | Monday 09 March 2026 00:44:32 +0000 (0:00:00.194) 0:00:23.166 ********** 2026-03-09 00:44:33.633222 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-09 00:44:33.633232 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-09 00:44:33.633242 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-09 00:44:33.633252 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-09 00:44:33.633261 | orchestrator | 2026-03-09 00:44:33.633271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:33.633280 | orchestrator | Monday 09 March 2026 00:44:33 +0000 (0:00:00.770) 0:00:23.936 ********** 2026-03-09 00:44:33.633290 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.673809 | orchestrator | 2026-03-09 00:44:39.673878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:39.673888 | orchestrator | Monday 09 March 2026 00:44:33 +0000 (0:00:00.170) 0:00:24.107 ********** 2026-03-09 00:44:39.673895 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.673903 | orchestrator | 2026-03-09 00:44:39.673910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:39.673917 | orchestrator | Monday 09 March 2026 00:44:33 +0000 (0:00:00.231) 0:00:24.339 ********** 2026-03-09 00:44:39.673924 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.673931 | orchestrator | 2026-03-09 00:44:39.673938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:44:39.673945 | orchestrator | Monday 09 March 2026 00:44:34 +0000 (0:00:00.183) 0:00:24.522 ********** 2026-03-09 00:44:39.673952 | 
orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.673959 | orchestrator | 2026-03-09 00:44:39.673966 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:44:39.673972 | orchestrator | Monday 09 March 2026 00:44:34 +0000 (0:00:00.543) 0:00:25.066 ********** 2026-03-09 00:44:39.673979 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:44:39.673986 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:44:39.673993 | orchestrator | 2026-03-09 00:44:39.674000 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-09 00:44:39.674007 | orchestrator | Monday 09 March 2026 00:44:34 +0000 (0:00:00.165) 0:00:25.232 ********** 2026-03-09 00:44:39.674048 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674056 | orchestrator | 2026-03-09 00:44:39.674063 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:44:39.674071 | orchestrator | Monday 09 March 2026 00:44:34 +0000 (0:00:00.144) 0:00:25.376 ********** 2026-03-09 00:44:39.674077 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674084 | orchestrator | 2026-03-09 00:44:39.674091 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:44:39.674098 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.115) 0:00:25.491 ********** 2026-03-09 00:44:39.674105 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674112 | orchestrator | 2026-03-09 00:44:39.674119 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:44:39.674126 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.107) 0:00:25.599 ********** 2026-03-09 00:44:39.674132 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:39.674140 | 
orchestrator | 2026-03-09 00:44:39.674147 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:44:39.674154 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.134) 0:00:25.733 ********** 2026-03-09 00:44:39.674161 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}}) 2026-03-09 00:44:39.674168 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}}) 2026-03-09 00:44:39.674189 | orchestrator | 2026-03-09 00:44:39.674197 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:44:39.674204 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.137) 0:00:25.871 ********** 2026-03-09 00:44:39.674211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}})  2026-03-09 00:44:39.674228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}})  2026-03-09 00:44:39.674236 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674242 | orchestrator | 2026-03-09 00:44:39.674249 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:44:39.674256 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.116) 0:00:25.987 ********** 2026-03-09 00:44:39.674263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}})  2026-03-09 00:44:39.674270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}})  2026-03-09 00:44:39.674277 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674283 | orchestrator | 2026-03-09 
00:44:39.674290 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:44:39.674297 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.130) 0:00:26.117 ********** 2026-03-09 00:44:39.674304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}})  2026-03-09 00:44:39.674311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}})  2026-03-09 00:44:39.674318 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674325 | orchestrator | 2026-03-09 00:44:39.674332 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:44:39.674339 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.172) 0:00:26.290 ********** 2026-03-09 00:44:39.674346 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:39.674352 | orchestrator | 2026-03-09 00:44:39.674359 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:44:39.674366 | orchestrator | Monday 09 March 2026 00:44:35 +0000 (0:00:00.134) 0:00:26.425 ********** 2026-03-09 00:44:39.674373 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:39.674380 | orchestrator | 2026-03-09 00:44:39.674389 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:44:39.674396 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.159) 0:00:26.584 ********** 2026-03-09 00:44:39.674415 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674423 | orchestrator | 2026-03-09 00:44:39.674431 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:44:39.674439 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.285) 0:00:26.870 ********** 2026-03-09 
00:44:39.674447 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674454 | orchestrator | 2026-03-09 00:44:39.674462 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:44:39.674470 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.139) 0:00:27.009 ********** 2026-03-09 00:44:39.674478 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674485 | orchestrator | 2026-03-09 00:44:39.674493 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:44:39.674536 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.150) 0:00:27.159 ********** 2026-03-09 00:44:39.674545 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:44:39.674553 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:44:39.674561 | orchestrator |  "sdb": { 2026-03-09 00:44:39.674570 | orchestrator |  "osd_lvm_uuid": "330a9702-ab5a-5bf7-9b95-ebb8b4c554e0" 2026-03-09 00:44:39.674578 | orchestrator |  }, 2026-03-09 00:44:39.674592 | orchestrator |  "sdc": { 2026-03-09 00:44:39.674599 | orchestrator |  "osd_lvm_uuid": "1060daf8-ac1b-51e4-8c2b-8176ae449cc2" 2026-03-09 00:44:39.674607 | orchestrator |  } 2026-03-09 00:44:39.674615 | orchestrator |  } 2026-03-09 00:44:39.674623 | orchestrator | } 2026-03-09 00:44:39.674631 | orchestrator | 2026-03-09 00:44:39.674638 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:44:39.674646 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.162) 0:00:27.322 ********** 2026-03-09 00:44:39.674653 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:39.674661 | orchestrator | 2026-03-09 00:44:39.674669 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:44:39.674677 | orchestrator | Monday 09 March 2026 00:44:36 +0000 (0:00:00.141) 0:00:27.463 ********** 2026-03-09 
00:44:39.674685 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:44:39.674692 | orchestrator |
2026-03-09 00:44:39.674700 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-09 00:44:39.674707 | orchestrator | Monday 09 March 2026 00:44:37 +0000 (0:00:00.135) 0:00:27.599 **********
2026-03-09 00:44:39.674715 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:44:39.674723 | orchestrator |
2026-03-09 00:44:39.674730 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-09 00:44:39.674737 | orchestrator | Monday 09 March 2026 00:44:37 +0000 (0:00:00.162) 0:00:27.761 **********
2026-03-09 00:44:39.674746 | orchestrator | changed: [testbed-node-4] => {
2026-03-09 00:44:39.674753 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-09 00:44:39.674761 | orchestrator |         "ceph_osd_devices": {
2026-03-09 00:44:39.674769 | orchestrator |             "sdb": {
2026-03-09 00:44:39.674777 | orchestrator |                 "osd_lvm_uuid": "330a9702-ab5a-5bf7-9b95-ebb8b4c554e0"
2026-03-09 00:44:39.674784 | orchestrator |             },
2026-03-09 00:44:39.674791 | orchestrator |             "sdc": {
2026-03-09 00:44:39.674797 | orchestrator |                 "osd_lvm_uuid": "1060daf8-ac1b-51e4-8c2b-8176ae449cc2"
2026-03-09 00:44:39.674804 | orchestrator |             }
2026-03-09 00:44:39.674811 | orchestrator |         },
2026-03-09 00:44:39.674818 | orchestrator |         "lvm_volumes": [
2026-03-09 00:44:39.674824 | orchestrator |             {
2026-03-09 00:44:39.674831 | orchestrator |                 "data": "osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0",
2026-03-09 00:44:39.674838 | orchestrator |                 "data_vg": "ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0"
2026-03-09 00:44:39.674845 | orchestrator |             },
2026-03-09 00:44:39.674851 | orchestrator |             {
2026-03-09 00:44:39.674858 | orchestrator |                 "data": "osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2",
2026-03-09 00:44:39.674865 | orchestrator |                 "data_vg": "ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2"
2026-03-09 00:44:39.674872 | orchestrator |             }
2026-03-09 00:44:39.674878 | orchestrator |         ]
2026-03-09 00:44:39.674885 | orchestrator |     }
2026-03-09 00:44:39.674892 | orchestrator | }
2026-03-09 00:44:39.674899 | orchestrator |
2026-03-09 00:44:39.674906 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-09 00:44:39.674912 | orchestrator | Monday 09 March 2026 00:44:37 +0000 (0:00:00.236) 0:00:27.998 **********
2026-03-09 00:44:39.674919 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-09 00:44:39.674926 | orchestrator |
2026-03-09 00:44:39.674932 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-09 00:44:39.674939 | orchestrator |
2026-03-09 00:44:39.674946 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:44:39.674952 | orchestrator | Monday 09 March 2026 00:44:38 +0000 (0:00:01.081) 0:00:29.079 **********
2026-03-09 00:44:39.674959 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-09 00:44:39.674966 | orchestrator |
2026-03-09 00:44:39.674973 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:44:39.674987 | orchestrator | Monday 09 March 2026 00:44:39 +0000 (0:00:00.535) 0:00:29.614 **********
2026-03-09 00:44:39.674994 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:39.675001 | orchestrator |
2026-03-09 00:44:39.675008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:39.675015 | orchestrator | Monday 09 March 2026 00:44:39 +0000 (0:00:00.217) 0:00:29.831 **********
2026-03-09 00:44:39.675021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:44:39.675028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:44:39.675035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:44:39.675042 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:44:39.675048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:44:39.675060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:44:47.918446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:44:47.918644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:44:47.918662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-09 00:44:47.918673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:44:47.918683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:44:47.918693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:44:47.918703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:44:47.918713 | orchestrator |
2026-03-09 00:44:47.918725 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918735 | orchestrator | Monday 09 March 2026 00:44:39 +0000 (0:00:00.315) 0:00:30.147 **********
2026-03-09 00:44:47.918745 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918756 | orchestrator |
2026-03-09 00:44:47.918766 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918775 | orchestrator | Monday 09 March 2026 00:44:39 +0000 (0:00:00.200) 0:00:30.347 **********
2026-03-09 00:44:47.918785 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918795 | orchestrator |
2026-03-09 00:44:47.918805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918814 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.192) 0:00:30.539 **********
2026-03-09 00:44:47.918824 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918834 | orchestrator |
2026-03-09 00:44:47.918844 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918853 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.192) 0:00:30.731 **********
2026-03-09 00:44:47.918863 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918873 | orchestrator |
2026-03-09 00:44:47.918882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918892 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.212) 0:00:30.944 **********
2026-03-09 00:44:47.918902 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918912 | orchestrator |
2026-03-09 00:44:47.918921 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918931 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.234) 0:00:31.178 **********
2026-03-09 00:44:47.918940 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.918950 | orchestrator |
2026-03-09 00:44:47.918960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.918970 | orchestrator | Monday 09 March 2026 00:44:40 +0000 (0:00:00.203) 0:00:31.382 **********
2026-03-09 00:44:47.919006 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919017 | orchestrator |
2026-03-09 00:44:47.919029 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919041 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.202) 0:00:31.584 **********
2026-03-09 00:44:47.919056 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919073 | orchestrator |
2026-03-09 00:44:47.919098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919115 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.204) 0:00:31.789 **********
2026-03-09 00:44:47.919130 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3)
2026-03-09 00:44:47.919148 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3)
2026-03-09 00:44:47.919164 | orchestrator |
2026-03-09 00:44:47.919179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919195 | orchestrator | Monday 09 March 2026 00:44:41 +0000 (0:00:00.614) 0:00:32.403 **********
2026-03-09 00:44:47.919209 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396)
2026-03-09 00:44:47.919223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396)
2026-03-09 00:44:47.919238 | orchestrator |
2026-03-09 00:44:47.919255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919270 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.473) 0:00:32.877 **********
2026-03-09 00:44:47.919286 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0)
2026-03-09 00:44:47.919302 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0)
2026-03-09 00:44:47.919316 | orchestrator |
2026-03-09 00:44:47.919333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919349 | orchestrator | Monday 09 March 2026 00:44:42 +0000 (0:00:00.444) 0:00:33.322 **********
2026-03-09 00:44:47.919363 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc)
2026-03-09 00:44:47.919380 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc)
2026-03-09 00:44:47.919396 | orchestrator |
2026-03-09 00:44:47.919413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:47.919429 | orchestrator | Monday 09 March 2026 00:44:43 +0000 (0:00:00.464) 0:00:33.787 **********
2026-03-09 00:44:47.919445 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:44:47.919463 | orchestrator |
2026-03-09 00:44:47.919479 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919596 | orchestrator | Monday 09 March 2026 00:44:43 +0000 (0:00:00.393) 0:00:34.180 **********
2026-03-09 00:44:47.919611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:44:47.919621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:44:47.919631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:44:47.919641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:44:47.919651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:44:47.919677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:44:47.919688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:44:47.919698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:44:47.919720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-09 00:44:47.919730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:44:47.919739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:44:47.919749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:44:47.919759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:44:47.919768 | orchestrator |
2026-03-09 00:44:47.919779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919788 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.410) 0:00:34.591 **********
2026-03-09 00:44:47.919798 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919808 | orchestrator |
2026-03-09 00:44:47.919818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919827 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.214) 0:00:34.805 **********
2026-03-09 00:44:47.919837 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919847 | orchestrator |
2026-03-09 00:44:47.919856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919866 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.218) 0:00:35.024 **********
2026-03-09 00:44:47.919880 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919891 | orchestrator |
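[Editor's note] The repeated "Add known links" tasks above extend the device list with stable /dev/disk/by-id aliases (the scsi-0QEMU_*/scsi-SQEMU_* names) that resolve to each base device. A minimal sketch of that resolution step as a pure function; the function name and dict-based input are illustrative assumptions, the real logic lives in /ansible/tasks/_add-device-links.yml:

```python
def add_device_links(devices, by_id_links):
    """Extend the list of usable device names with any by-id symlink
    whose target is an already-known base device."""
    known = set(devices)
    extended = list(devices)
    for link, target in by_id_links.items():
        if target in known:
            extended.append(link)
    return extended

# Example mirroring the log: two QEMU by-id aliases resolve to sdb;
# the DVD alias is ignored because sr0 is not in the candidate list.
links = {
    "scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(add_device_links(["sda", "sdb"], links))
```

On a real host the `by_id_links` mapping would come from reading the symlinks under /dev/disk/by-id.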
2026-03-09 00:44:47.919901 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919910 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.214) 0:00:35.238 **********
2026-03-09 00:44:47.919920 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919930 | orchestrator |
2026-03-09 00:44:47.919939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919949 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.208) 0:00:35.447 **********
2026-03-09 00:44:47.919958 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.919968 | orchestrator |
2026-03-09 00:44:47.919978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.919987 | orchestrator | Monday 09 March 2026 00:44:45 +0000 (0:00:00.198) 0:00:35.645 **********
2026-03-09 00:44:47.919997 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920007 | orchestrator |
2026-03-09 00:44:47.920017 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920026 | orchestrator | Monday 09 March 2026 00:44:45 +0000 (0:00:00.694) 0:00:36.340 **********
2026-03-09 00:44:47.920036 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920046 | orchestrator |
2026-03-09 00:44:47.920055 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920065 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.214) 0:00:36.555 **********
2026-03-09 00:44:47.920075 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920084 | orchestrator |
2026-03-09 00:44:47.920094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920103 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.230) 0:00:36.786 **********
2026-03-09 00:44:47.920113 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-09 00:44:47.920123 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-09 00:44:47.920133 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-09 00:44:47.920142 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-09 00:44:47.920152 | orchestrator |
2026-03-09 00:44:47.920162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920171 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:00.669) 0:00:37.455 **********
2026-03-09 00:44:47.920181 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920191 | orchestrator |
2026-03-09 00:44:47.920206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920216 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.222) 0:00:37.678 **********
2026-03-09 00:44:47.920226 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920235 | orchestrator |
2026-03-09 00:44:47.920245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920255 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.281) 0:00:37.959 **********
2026-03-09 00:44:47.920265 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920274 | orchestrator |
2026-03-09 00:44:47.920284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:47.920293 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.204) 0:00:38.163 **********
2026-03-09 00:44:47.920303 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:47.920313 | orchestrator |
2026-03-09 00:44:47.920329 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-09 00:44:52.987871 | orchestrator | Monday 09 March 2026 00:44:47 +0000 (0:00:00.223) 0:00:38.387 **********
2026-03-09 00:44:52.987993 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-09 00:44:52.988016 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-09 00:44:52.988035 | orchestrator |
2026-03-09 00:44:52.988056 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-09 00:44:52.988072 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.238) 0:00:38.625 **********
2026-03-09 00:44:52.988089 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988106 | orchestrator |
2026-03-09 00:44:52.988119 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-09 00:44:52.988129 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.159) 0:00:38.784 **********
2026-03-09 00:44:52.988144 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988156 | orchestrator |
2026-03-09 00:44:52.988166 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-09 00:44:52.988176 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.129) 0:00:38.914 **********
2026-03-09 00:44:52.988189 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988205 | orchestrator |
2026-03-09 00:44:52.988221 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-09 00:44:52.988238 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.371) 0:00:39.286 **********
2026-03-09 00:44:52.988254 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:52.988272 | orchestrator |
2026-03-09 00:44:52.988289 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-09 00:44:52.988302 | orchestrator | Monday 09 March 2026 00:44:48 +0000 (0:00:00.138) 0:00:39.424 **********
2026-03-09 00:44:52.988312 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}})
2026-03-09 00:44:52.988323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bfced398-94c6-51d2-a38a-d9d8acf734fd'}})
2026-03-09 00:44:52.988332 | orchestrator |
2026-03-09 00:44:52.988342 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-09 00:44:52.988352 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:00.168) 0:00:39.592 **********
2026-03-09 00:44:52.988363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}})
2026-03-09 00:44:52.988375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bfced398-94c6-51d2-a38a-d9d8acf734fd'}})
2026-03-09 00:44:52.988389 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988408 | orchestrator |
2026-03-09 00:44:52.988426 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-09 00:44:52.988442 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:00.184) 0:00:39.777 **********
2026-03-09 00:44:52.988460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}})
2026-03-09 00:44:52.988543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bfced398-94c6-51d2-a38a-d9d8acf734fd'}})
2026-03-09 00:44:52.988566 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988585 | orchestrator |
2026-03-09 00:44:52.988607 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-09 00:44:52.988628 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:00.190) 0:00:39.968 **********
2026-03-09 00:44:52.988671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}})
2026-03-09 00:44:52.988688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bfced398-94c6-51d2-a38a-d9d8acf734fd'}})
2026-03-09 00:44:52.988707 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988725 | orchestrator |
2026-03-09 00:44:52.988743 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-09 00:44:52.988761 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:00.182) 0:00:40.151 **********
2026-03-09 00:44:52.988779 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:52.988796 | orchestrator |
2026-03-09 00:44:52.988814 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-09 00:44:52.988831 | orchestrator | Monday 09 March 2026 00:44:49 +0000 (0:00:00.187) 0:00:40.339 **********
2026-03-09 00:44:52.988848 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:52.988865 | orchestrator |
2026-03-09 00:44:52.988883 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-09 00:44:52.988900 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.238) 0:00:40.577 **********
2026-03-09 00:44:52.988918 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.988936 | orchestrator |
2026-03-09 00:44:52.988955 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-09 00:44:52.988973 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.148) 0:00:40.725 **********
2026-03-09 00:44:52.988992 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.989010 | orchestrator |
2026-03-09 00:44:52.989027 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-09 00:44:52.989045 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.186) 0:00:40.912 **********
2026-03-09 00:44:52.989062 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.989080 | orchestrator |
2026-03-09 00:44:52.989098 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-09 00:44:52.989117 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.132) 0:00:41.045 **********
2026-03-09 00:44:52.989136 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:44:52.989154 | orchestrator |     "ceph_osd_devices": {
2026-03-09 00:44:52.989174 | orchestrator |         "sdb": {
2026-03-09 00:44:52.989219 | orchestrator |             "osd_lvm_uuid": "2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd"
2026-03-09 00:44:52.989239 | orchestrator |         },
2026-03-09 00:44:52.989257 | orchestrator |         "sdc": {
2026-03-09 00:44:52.989276 | orchestrator |             "osd_lvm_uuid": "bfced398-94c6-51d2-a38a-d9d8acf734fd"
2026-03-09 00:44:52.989293 | orchestrator |         }
2026-03-09 00:44:52.989310 | orchestrator |     }
2026-03-09 00:44:52.989329 | orchestrator | }
2026-03-09 00:44:52.989347 | orchestrator |
2026-03-09 00:44:52.989364 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-09 00:44:52.989382 | orchestrator | Monday 09 March 2026 00:44:50 +0000 (0:00:00.213) 0:00:41.258 **********
2026-03-09 00:44:52.989399 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.989416 | orchestrator |
2026-03-09 00:44:52.989433 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-09 00:44:52.989450 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.405) 0:00:41.663 **********
2026-03-09 00:44:52.989468 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.989576 | orchestrator |
2026-03-09 00:44:52.989595 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-09 00:44:52.989611 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.151) 0:00:41.815 **********
2026-03-09 00:44:52.989626 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:52.989644 | orchestrator |
2026-03-09 00:44:52.989660 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-09 00:44:52.989676 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.156) 0:00:41.971 **********
2026-03-09 00:44:52.989690 | orchestrator | changed: [testbed-node-5] => {
2026-03-09 00:44:52.989700 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-09 00:44:52.989709 | orchestrator |         "ceph_osd_devices": {
2026-03-09 00:44:52.989719 | orchestrator |             "sdb": {
2026-03-09 00:44:52.989729 | orchestrator |                 "osd_lvm_uuid": "2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd"
2026-03-09 00:44:52.989739 | orchestrator |             },
2026-03-09 00:44:52.989749 | orchestrator |             "sdc": {
2026-03-09 00:44:52.989759 | orchestrator |                 "osd_lvm_uuid": "bfced398-94c6-51d2-a38a-d9d8acf734fd"
2026-03-09 00:44:52.989768 | orchestrator |             }
2026-03-09 00:44:52.989778 | orchestrator |         },
2026-03-09 00:44:52.989788 | orchestrator |         "lvm_volumes": [
2026-03-09 00:44:52.989797 | orchestrator |             {
2026-03-09 00:44:52.989807 | orchestrator |                 "data": "osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd",
2026-03-09 00:44:52.989818 | orchestrator |                 "data_vg": "ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd"
2026-03-09 00:44:52.989827 | orchestrator |             },
2026-03-09 00:44:52.989837 | orchestrator |             {
2026-03-09 00:44:52.989847 | orchestrator |                 "data": "osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd",
2026-03-09 00:44:52.989857 | orchestrator |                 "data_vg": "ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd"
2026-03-09 00:44:52.989867 | orchestrator |             }
2026-03-09 00:44:52.989877 | orchestrator |         ]
2026-03-09 00:44:52.989891 | orchestrator |     }
2026-03-09 00:44:52.989901 | orchestrator | }
2026-03-09 00:44:52.989912 | orchestrator |
2026-03-09 00:44:52.989929 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-09 00:44:52.989945 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:00.234) 0:00:42.205 **********
2026-03-09 00:44:52.989962 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-09 00:44:52.989978 | orchestrator |
2026-03-09 00:44:52.989994 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:44:52.990011 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:44:52.990102 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:44:52.990117 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:44:52.990133 | orchestrator |
2026-03-09 00:44:52.990154 | orchestrator |
2026-03-09 00:44:52.990176 | orchestrator |
2026-03-09 00:44:52.990192 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:44:52.990208 | orchestrator | Monday 09 March 2026 00:44:52 +0000 (0:00:01.236) 0:00:43.442 **********
2026-03-09 00:44:52.990224 | orchestrator | ===============================================================================
2026-03-09 00:44:52.990240 | orchestrator | Write configuration file ------------------------------------------------ 4.42s
2026-03-09 00:44:52.990258 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2026-03-09 00:44:52.990276 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2026-03-09 00:44:52.990295 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2026-03-09 00:44:52.990331 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.13s
2026-03-09 00:44:52.990348 | orchestrator | Print configuration data ------------------------------------------------ 0.90s
2026-03-09 00:44:52.990364 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2026-03-09 00:44:52.990380 | orchestrator | Print WAL devices ------------------------------------------------------- 0.78s
2026-03-09 00:44:52.990396 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-03-09 00:44:52.990410 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-09 00:44:52.990419 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.72s
2026-03-09 00:44:52.990429 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-09 00:44:52.990439 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-03-09 00:44:52.990464 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2026-03-09 00:44:53.526767 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-03-09 00:44:53.526860 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-03-09 00:44:53.526869 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.61s
2026-03-09 00:44:53.526876 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s
2026-03-09 00:44:53.526883 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-03-09 00:44:53.526889 | orchestrator | Set DB devices config data ---------------------------------------------- 0.57s
2026-03-09 00:45:16.325059 | orchestrator | 2026-03-09 00:45:16 | INFO  | Task d2a45975-e178-46c4-9598-6eb4299b343f (sync inventory) is running in background. Output coming soon.
2026-03-09 00:45:46.143618 | orchestrator | 2026-03-09 00:45:17 | INFO  | Starting group_vars file reorganization
2026-03-09 00:45:46.143695 | orchestrator | 2026-03-09 00:45:17 | INFO  | Moved 0 file(s) to their respective directories
2026-03-09 00:45:46.143702 | orchestrator | 2026-03-09 00:45:17 | INFO  | Group_vars file reorganization completed
2026-03-09 00:45:46.143707 | orchestrator | 2026-03-09 00:45:20 | INFO  | Starting variable preparation from inventory
2026-03-09 00:45:46.143712 | orchestrator | 2026-03-09 00:45:22 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-09 00:45:46.143716 | orchestrator | 2026-03-09 00:45:22 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-09 00:45:46.143736 | orchestrator | 2026-03-09 00:45:22 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-09 00:45:46.143740 | orchestrator | 2026-03-09 00:45:22 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-09 00:45:46.143745 | orchestrator | 2026-03-09 00:45:22 | INFO  | Variable preparation completed
2026-03-09 00:45:46.143749 | orchestrator | 2026-03-09 00:45:24 | INFO  | Starting inventory overwrite handling
2026-03-09 00:45:46.143753 | orchestrator | 2026-03-09 00:45:24 | INFO  | Handling group overwrites in 99-overwrite
2026-03-09 00:45:46.143760 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removing group frr:children from 60-generic
2026-03-09 00:45:46.143764 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-09 00:45:46.143769 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-09 00:45:46.143773 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-09 00:45:46.143777 | orchestrator | 2026-03-09 00:45:24 | INFO  | Handling group overwrites in 20-roles
2026-03-09 00:45:46.143781 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-09 00:45:46.143799 | orchestrator | 2026-03-09 00:45:24 | INFO  | Removed 5 group(s) in total
2026-03-09 00:45:46.143803 | orchestrator | 2026-03-09 00:45:24 | INFO  | Inventory overwrite handling completed
2026-03-09 00:45:46.143807 | orchestrator | 2026-03-09 00:45:25 | INFO  | Starting merge of inventory files
2026-03-09 00:45:46.143811 | orchestrator | 2026-03-09 00:45:25 | INFO  | Inventory files merged successfully
2026-03-09 00:45:46.143815 | orchestrator | 2026-03-09 00:45:31 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-09 00:45:46.143819 | orchestrator | 2026-03-09 00:45:44 | INFO  | Successfully wrote ClusterShell configuration
2026-03-09 00:45:46.143823 | orchestrator | [master 1a2c3b4] 2026-03-09-00-45
2026-03-09 00:45:46.143828 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-09 00:45:48.680954 | orchestrator | 2026-03-09 00:45:48 | INFO  | Task 6bdaf84c-e9b1-4634-97e8-827dea26fbb0 (ceph-create-lvm-devices) was prepared for execution.
2026-03-09 00:45:48.681054 | orchestrator | 2026-03-09 00:45:48 | INFO  | It takes a moment until task 6bdaf84c-e9b1-4634-97e8-827dea26fbb0 (ceph-create-lvm-devices) has been started and output is visible here.
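[Editor's note] The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks earlier in this play map each device's osd_lvm_uuid onto a data LV and a data VG name, as the "Print configuration data" output shows. A minimal sketch of that mapping, assuming the block-only case (no separate DB/WAL devices); the function name is illustrative:

```python
def build_lvm_volumes(ceph_osd_devices):
    """Block-only case: one LV/VG pair per OSD device, both named
    after its osd_lvm_uuid (mirrors 'Print configuration data')."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# UUIDs taken from the testbed-node-5 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd"},
    "sdc": {"osd_lvm_uuid": "bfced398-94c6-51d2-a38a-d9d8acf734fd"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

In the db/wal variants (skipped in this run) the structure would additionally carry the DB/WAL VG references.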
2026-03-09 00:46:02.139869 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 00:46:02.139976 | orchestrator | 2.16.14
2026-03-09 00:46:02.139995 | orchestrator |
2026-03-09 00:46:02.140010 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-09 00:46:02.140025 | orchestrator |
2026-03-09 00:46:02.140039 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:46:02.140053 | orchestrator | Monday 09 March 2026 00:45:53 +0000 (0:00:00.356) 0:00:00.356 **********
2026-03-09 00:46:02.140069 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-09 00:46:02.140081 | orchestrator |
2026-03-09 00:46:02.140092 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:46:02.140104 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.260) 0:00:00.617 **********
2026-03-09 00:46:02.140117 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:02.140129 | orchestrator |
2026-03-09 00:46:02.140142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140155 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.232) 0:00:00.849 **********
2026-03-09 00:46:02.140168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:46:02.140180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:46:02.140193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:46:02.140205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:46:02.140217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:46:02.140230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:46:02.140242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:46:02.140254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:46:02.140267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-09 00:46:02.140280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:46:02.140293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:46:02.140306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:46:02.140319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:46:02.140364 | orchestrator |
2026-03-09 00:46:02.140378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140391 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.589) 0:00:01.439 **********
2026-03-09 00:46:02.140405 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140417 | orchestrator |
2026-03-09 00:46:02.140431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140445 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:00.213) 0:00:01.653 **********
2026-03-09 00:46:02.140458 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140472 | orchestrator |
2026-03-09 00:46:02.140514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140529 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:00.266) 0:00:01.919 **********
2026-03-09 00:46:02.140542 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140555 | orchestrator |
2026-03-09 00:46:02.140568 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140581 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:00.219) 0:00:02.138 **********
2026-03-09 00:46:02.140594 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140607 | orchestrator |
2026-03-09 00:46:02.140621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140635 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:00.218) 0:00:02.356 **********
2026-03-09 00:46:02.140648 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140661 | orchestrator |
2026-03-09 00:46:02.140674 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140688 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.208) 0:00:02.564 **********
2026-03-09 00:46:02.140701 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140714 | orchestrator |
2026-03-09 00:46:02.140728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140741 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.208) 0:00:02.773 **********
2026-03-09 00:46:02.140754 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140767 | orchestrator |
2026-03-09 00:46:02.140779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140792 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.220) 0:00:02.993 **********
2026-03-09 00:46:02.140799 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.140807 | orchestrator |
2026-03-09 00:46:02.140818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140832 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.251) 0:00:03.244 **********
2026-03-09 00:46:02.140842 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3)
2026-03-09 00:46:02.140851 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3)
2026-03-09 00:46:02.140859 | orchestrator |
2026-03-09 00:46:02.140866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140892 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.497) 0:00:03.742 **********
2026-03-09 00:46:02.140901 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29)
2026-03-09 00:46:02.140908 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29)
2026-03-09 00:46:02.140916 | orchestrator |
2026-03-09 00:46:02.140923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140930 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.747) 0:00:04.489 **********
2026-03-09 00:46:02.140938 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f)
2026-03-09 00:46:02.140945 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f)
2026-03-09 00:46:02.140963 | orchestrator |
2026-03-09 00:46:02.140970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.140978 | orchestrator | Monday 09 March 2026 00:45:58 +0000 (0:00:00.717) 0:00:05.207 **********
2026-03-09 00:46:02.140985 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112)
2026-03-09 00:46:02.140992 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112)
2026-03-09 00:46:02.141000 | orchestrator |
2026-03-09 00:46:02.141007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:02.141014 | orchestrator | Monday 09 March 2026 00:45:59 +0000 (0:00:00.990) 0:00:06.197 **********
2026-03-09 00:46:02.141021 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:46:02.141029 | orchestrator |
2026-03-09 00:46:02.141036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141043 | orchestrator | Monday 09 March 2026 00:45:59 +0000 (0:00:00.347) 0:00:06.545 **********
2026-03-09 00:46:02.141051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-09 00:46:02.141058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-09 00:46:02.141065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-09 00:46:02.141094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-09 00:46:02.141107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-09 00:46:02.141118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-09 00:46:02.141132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-09 00:46:02.141145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-09 00:46:02.141159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-09 00:46:02.141178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-09 00:46:02.141191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-09 00:46:02.141209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-09 00:46:02.141221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-09 00:46:02.141235 | orchestrator |
2026-03-09 00:46:02.141249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141261 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:00.593) 0:00:07.138 **********
2026-03-09 00:46:02.141275 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141288 | orchestrator |
2026-03-09 00:46:02.141301 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141313 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:00.244) 0:00:07.382 **********
2026-03-09 00:46:02.141325 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141336 | orchestrator |
2026-03-09 00:46:02.141348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141361 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:00.212) 0:00:07.595 **********
2026-03-09 00:46:02.141369 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141376 | orchestrator |
2026-03-09 00:46:02.141384 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141391 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:00.232) 0:00:07.828 **********
2026-03-09 00:46:02.141398 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141413 | orchestrator |
2026-03-09 00:46:02.141421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141428 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:00.216) 0:00:08.045 **********
2026-03-09 00:46:02.141435 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141443 | orchestrator |
2026-03-09 00:46:02.141450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141457 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:00.209) 0:00:08.254 **********
2026-03-09 00:46:02.141465 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141475 | orchestrator |
2026-03-09 00:46:02.141522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:02.141535 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:00.217) 0:00:08.472 **********
2026-03-09 00:46:02.141546 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:02.141554 | orchestrator |
2026-03-09 00:46:02.141569 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813202 | orchestrator | Monday 09 March 2026 00:46:02 +0000 (0:00:00.216) 0:00:08.688 **********
2026-03-09 00:46:10.813298 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813310 | orchestrator |
2026-03-09 00:46:10.813320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813329 | orchestrator | Monday 09 March 2026 00:46:02 +0000 (0:00:00.261) 0:00:08.950 **********
2026-03-09 00:46:10.813337 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-09 00:46:10.813346 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-09 00:46:10.813354 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-09 00:46:10.813362 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-09 00:46:10.813370 | orchestrator |
2026-03-09 00:46:10.813378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813386 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:01.441) 0:00:10.392 **********
2026-03-09 00:46:10.813394 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813402 | orchestrator |
2026-03-09 00:46:10.813410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813418 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:00.271) 0:00:10.664 **********
2026-03-09 00:46:10.813426 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813434 | orchestrator |
2026-03-09 00:46:10.813441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813449 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:00.288) 0:00:10.952 **********
2026-03-09 00:46:10.813458 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813466 | orchestrator |
2026-03-09 00:46:10.813474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:10.813482 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:00.195) 0:00:11.148 **********
2026-03-09 00:46:10.813514 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813522 | orchestrator |
2026-03-09 00:46:10.813530 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:46:10.813537 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:00.219) 0:00:11.368 **********
2026-03-09 00:46:10.813545 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813553 | orchestrator |
2026-03-09 00:46:10.813561 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:46:10.813568 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:00.139) 0:00:11.507 **********
2026-03-09 00:46:10.813576 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a76ca51e-4549-54be-bcb5-a2c49bca5f85'}})
2026-03-09 00:46:10.813585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}})
2026-03-09 00:46:10.813593 | orchestrator |
2026-03-09 00:46:10.813600 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:46:10.813632 | orchestrator | Monday 09 March 2026 00:46:05 +0000 (0:00:00.211) 0:00:11.718 **********
2026-03-09 00:46:10.813642 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813651 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.813659 | orchestrator |
2026-03-09 00:46:10.813667 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:46:10.813675 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:01.941) 0:00:13.660 **********
2026-03-09 00:46:10.813683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.813699 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813707 | orchestrator |
2026-03-09 00:46:10.813715 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:46:10.813723 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:00.168) 0:00:13.828 **********
2026-03-09 00:46:10.813731 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813738 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.813748 | orchestrator |
2026-03-09 00:46:10.813757 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:46:10.813766 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:01.425) 0:00:15.254 **********
2026-03-09 00:46:10.813775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.813793 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813803 | orchestrator |
2026-03-09 00:46:10.813812 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:46:10.813822 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:00.160) 0:00:15.414 **********
2026-03-09 00:46:10.813844 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813854 | orchestrator |
2026-03-09 00:46:10.813863 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:46:10.813873 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:00.134) 0:00:15.549 **********
2026-03-09 00:46:10.813882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.813902 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813911 | orchestrator |
2026-03-09 00:46:10.813920 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:46:10.813929 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:00.401) 0:00:15.950 **********
2026-03-09 00:46:10.813938 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.813947 | orchestrator |
2026-03-09 00:46:10.813957 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:46:10.813966 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:00.136) 0:00:16.087 **********
2026-03-09 00:46:10.813981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.813991 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.814000 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814010 | orchestrator |
2026-03-09 00:46:10.814053 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-09 00:46:10.814061 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:00.164) 0:00:16.251 **********
2026-03-09 00:46:10.814069 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814077 | orchestrator |
2026-03-09 00:46:10.814085 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-09 00:46:10.814093 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:00.140) 0:00:16.392 **********
2026-03-09 00:46:10.814101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.814109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.814117 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814125 | orchestrator |
2026-03-09 00:46:10.814132 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-09 00:46:10.814140 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:00.154) 0:00:16.546 **********
2026-03-09 00:46:10.814148 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:10.814156 | orchestrator |
2026-03-09 00:46:10.814164 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-09 00:46:10.814186 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.149) 0:00:16.696 **********
2026-03-09 00:46:10.814199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.814207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.814215 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814223 | orchestrator |
2026-03-09 00:46:10.814231 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-09 00:46:10.814239 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.182) 0:00:16.878 **********
2026-03-09 00:46:10.814247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.814255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.814263 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814271 | orchestrator |
2026-03-09 00:46:10.814279 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-09 00:46:10.814287 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.179) 0:00:17.058 **********
2026-03-09 00:46:10.814295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:10.814303 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:10.814311 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814318 | orchestrator |
2026-03-09 00:46:10.814327 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-09 00:46:10.814334 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.161) 0:00:17.220 **********
2026-03-09 00:46:10.814348 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:10.814356 | orchestrator |
2026-03-09 00:46:10.814364 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-09 00:46:10.814377 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.142) 0:00:17.362 **********
2026-03-09 00:46:17.703565 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.703684 | orchestrator |
2026-03-09 00:46:17.703701 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-09 00:46:17.703716 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:00.158) 0:00:17.521 **********
2026-03-09 00:46:17.703728 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.703739 | orchestrator |
2026-03-09 00:46:17.703751 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-09 00:46:17.703762 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:00.182) 0:00:17.704 **********
2026-03-09 00:46:17.703773 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:46:17.703785 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-09 00:46:17.703797 | orchestrator | }
2026-03-09 00:46:17.703809 | orchestrator |
2026-03-09 00:46:17.703820 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-09 00:46:17.703831 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:00.401) 0:00:18.106 **********
2026-03-09 00:46:17.703842 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:46:17.703853 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-09 00:46:17.703864 | orchestrator | }
2026-03-09 00:46:17.703876 | orchestrator |
2026-03-09 00:46:17.703887 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-09 00:46:17.703898 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:00.152) 0:00:18.258 **********
2026-03-09 00:46:17.703909 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:46:17.703921 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-09 00:46:17.703932 | orchestrator | }
2026-03-09 00:46:17.703944 | orchestrator |
2026-03-09 00:46:17.703955 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-09 00:46:17.703966 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:00.159) 0:00:18.418 **********
2026-03-09 00:46:17.703977 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:17.703988 | orchestrator |
2026-03-09 00:46:17.703999 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-09 00:46:17.704012 | orchestrator | Monday 09 March 2026 00:46:12 +0000 (0:00:00.639) 0:00:19.058 **********
2026-03-09 00:46:17.704025 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:17.704038 | orchestrator |
2026-03-09 00:46:17.704051 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-09 00:46:17.704064 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.548) 0:00:19.607 **********
2026-03-09 00:46:17.704078 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:17.704091 | orchestrator |
2026-03-09 00:46:17.704104 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-09 00:46:17.704118 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.492) 0:00:20.099 **********
2026-03-09 00:46:17.704131 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:46:17.704145 | orchestrator |
2026-03-09 00:46:17.704156 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-09 00:46:17.704168 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.166) 0:00:20.266 **********
2026-03-09 00:46:17.704179 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704190 | orchestrator |
2026-03-09 00:46:17.704201 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-09 00:46:17.704212 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.152) 0:00:20.418 **********
2026-03-09 00:46:17.704224 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704234 | orchestrator |
2026-03-09 00:46:17.704246 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-09 00:46:17.704284 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:00.130) 0:00:20.549 **********
2026-03-09 00:46:17.704311 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:46:17.704323 | orchestrator |  "vgs_report": {
2026-03-09 00:46:17.704335 | orchestrator |  "vg": []
2026-03-09 00:46:17.704347 | orchestrator |  }
2026-03-09 00:46:17.704358 | orchestrator | }
2026-03-09 00:46:17.704369 | orchestrator |
2026-03-09 00:46:17.704380 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:46:17.704391 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:00.155) 0:00:20.704 **********
2026-03-09 00:46:17.704402 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704413 | orchestrator |
2026-03-09 00:46:17.704424 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:46:17.704435 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:00.152) 0:00:20.857 **********
2026-03-09 00:46:17.704446 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704457 | orchestrator |
2026-03-09 00:46:17.704468 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:46:17.704479 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:00.149) 0:00:21.006 **********
2026-03-09 00:46:17.704568 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704580 | orchestrator |
2026-03-09 00:46:17.704591 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:46:17.704602 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:00.361) 0:00:21.368 **********
2026-03-09 00:46:17.704613 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704624 | orchestrator |
2026-03-09 00:46:17.704635 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:46:17.704646 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:00.156) 0:00:21.525 **********
2026-03-09 00:46:17.704657 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704668 | orchestrator |
2026-03-09 00:46:17.704679 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-09 00:46:17.704690 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.145) 0:00:21.671 **********
2026-03-09 00:46:17.704701 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704712 | orchestrator |
2026-03-09 00:46:17.704723 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-09 00:46:17.704734 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.146) 0:00:21.818 **********
2026-03-09 00:46:17.704745 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704756 | orchestrator |
2026-03-09 00:46:17.704767 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-09 00:46:17.704778 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.151) 0:00:21.969 **********
2026-03-09 00:46:17.704808 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704819 | orchestrator |
2026-03-09 00:46:17.704830 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-09 00:46:17.704841 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.143) 0:00:22.113 **********
2026-03-09 00:46:17.704852 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704863 | orchestrator |
2026-03-09 00:46:17.704874 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-09 00:46:17.704885 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.142) 0:00:22.255 **********
2026-03-09 00:46:17.704896 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704907 | orchestrator |
2026-03-09 00:46:17.704917 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-09 00:46:17.704928 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.141) 0:00:22.397 **********
2026-03-09 00:46:17.704939 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.704950 | orchestrator |
2026-03-09 00:46:17.704961 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-09 00:46:17.704971 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:00.143) 0:00:22.541 **********
2026-03-09 00:46:17.704992 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705003 | orchestrator |
2026-03-09 00:46:17.705014 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-09 00:46:17.705025 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:00.146) 0:00:22.687 **********
2026-03-09 00:46:17.705036 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705047 | orchestrator |
2026-03-09 00:46:17.705059 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-09 00:46:17.705070 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:00.164) 0:00:22.852 **********
2026-03-09 00:46:17.705080 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705091 | orchestrator |
2026-03-09 00:46:17.705102 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-09 00:46:17.705113 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:00.151) 0:00:23.004 **********
2026-03-09 00:46:17.705125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:17.705138 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:17.705149 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705160 | orchestrator |
2026-03-09 00:46:17.705171 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-09 00:46:17.705182 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:00.404) 0:00:23.408 **********
2026-03-09 00:46:17.705193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:17.705204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:17.705215 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705226 | orchestrator |
2026-03-09 00:46:17.705237 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-09 00:46:17.705248 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.159) 0:00:23.567 **********
2026-03-09 00:46:17.705259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})
2026-03-09 00:46:17.705270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})
2026-03-09 00:46:17.705281 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:46:17.705292 | orchestrator |
2026-03-09 00:46:17.705303 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-09 00:46:17.705313 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.175) 0:00:23.743 **********
2026-03-09 00:46:17.705324 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:17.705335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:17.705346 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:17.705357 | orchestrator | 2026-03-09 00:46:17.705368 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-09 00:46:17.705379 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.170) 0:00:23.913 ********** 2026-03-09 00:46:17.705389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:17.705400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:17.705417 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:17.705428 | orchestrator | 2026-03-09 00:46:17.705439 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-09 00:46:17.705458 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.179) 0:00:24.093 ********** 2026-03-09 00:46:17.705476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.153818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.153925 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.153942 | orchestrator | 2026-03-09 00:46:23.153956 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-09 00:46:23.153969 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.162) 0:00:24.256 ********** 2026-03-09 00:46:23.153981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.153994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.154007 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.154083 | orchestrator | 2026-03-09 00:46:23.154097 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-09 00:46:23.154108 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.171) 0:00:24.428 ********** 2026-03-09 00:46:23.154120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.154132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.154143 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.154156 | orchestrator | 2026-03-09 00:46:23.154167 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-09 00:46:23.154180 | orchestrator | Monday 09 March 2026 00:46:18 +0000 (0:00:00.173) 0:00:24.602 ********** 2026-03-09 00:46:23.154192 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:46:23.154205 | orchestrator | 2026-03-09 00:46:23.154217 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-09 00:46:23.154228 | orchestrator | Monday 09 March 2026 00:46:18 +0000 
(0:00:00.497) 0:00:25.099 ********** 2026-03-09 00:46:23.154239 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:46:23.154250 | orchestrator | 2026-03-09 00:46:23.154262 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-09 00:46:23.154273 | orchestrator | Monday 09 March 2026 00:46:18 +0000 (0:00:00.458) 0:00:25.558 ********** 2026-03-09 00:46:23.154285 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:46:23.154296 | orchestrator | 2026-03-09 00:46:23.154307 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-09 00:46:23.154318 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.145) 0:00:25.703 ********** 2026-03-09 00:46:23.154330 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'vg_name': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}) 2026-03-09 00:46:23.154361 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'vg_name': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'}) 2026-03-09 00:46:23.154374 | orchestrator | 2026-03-09 00:46:23.154385 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-09 00:46:23.154398 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.203) 0:00:25.906 ********** 2026-03-09 00:46:23.154410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.154449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.154464 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.154476 | orchestrator | 2026-03-09 00:46:23.154519 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-09 00:46:23.154531 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.388) 0:00:26.294 ********** 2026-03-09 00:46:23.154544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.154556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.154568 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.154581 | orchestrator | 2026-03-09 00:46:23.154594 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:46:23.154608 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.172) 0:00:26.466 ********** 2026-03-09 00:46:23.154621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'})  2026-03-09 00:46:23.154634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'})  2026-03-09 00:46:23.154646 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:46:23.154658 | orchestrator | 2026-03-09 00:46:23.154671 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:46:23.154683 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:00.176) 0:00:26.643 ********** 2026-03-09 00:46:23.154720 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:46:23.154731 | orchestrator |  "lvm_report": { 2026-03-09 00:46:23.154741 | orchestrator |  "lv": [ 2026-03-09 00:46:23.154751 | orchestrator |  { 2026-03-09 00:46:23.154760 | orchestrator |  "lv_name": 
"osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3", 2026-03-09 00:46:23.154769 | orchestrator |  "vg_name": "ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3" 2026-03-09 00:46:23.154776 | orchestrator |  }, 2026-03-09 00:46:23.154784 | orchestrator |  { 2026-03-09 00:46:23.154791 | orchestrator |  "lv_name": "osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85", 2026-03-09 00:46:23.154798 | orchestrator |  "vg_name": "ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85" 2026-03-09 00:46:23.154805 | orchestrator |  } 2026-03-09 00:46:23.154817 | orchestrator |  ], 2026-03-09 00:46:23.154829 | orchestrator |  "pv": [ 2026-03-09 00:46:23.154841 | orchestrator |  { 2026-03-09 00:46:23.154853 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:46:23.154865 | orchestrator |  "vg_name": "ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85" 2026-03-09 00:46:23.154876 | orchestrator |  }, 2026-03-09 00:46:23.154887 | orchestrator |  { 2026-03-09 00:46:23.154899 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:46:23.154911 | orchestrator |  "vg_name": "ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3" 2026-03-09 00:46:23.154923 | orchestrator |  } 2026-03-09 00:46:23.154936 | orchestrator |  ] 2026-03-09 00:46:23.154948 | orchestrator |  } 2026-03-09 00:46:23.154959 | orchestrator | } 2026-03-09 00:46:23.154972 | orchestrator | 2026-03-09 00:46:23.154984 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-09 00:46:23.154996 | orchestrator | 2026-03-09 00:46:23.155009 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:46:23.155021 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:00.293) 0:00:26.937 ********** 2026-03-09 00:46:23.155046 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:46:23.155059 | orchestrator | 2026-03-09 00:46:23.155070 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 
00:46:23.155082 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:00.250) 0:00:27.188 ********** 2026-03-09 00:46:23.155094 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:23.155106 | orchestrator | 2026-03-09 00:46:23.155119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155131 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:00.265) 0:00:27.454 ********** 2026-03-09 00:46:23.155143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:46:23.155153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:46:23.155160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:46:23.155167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:46:23.155175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:46:23.155182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:46:23.155189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:46:23.155205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:46:23.155212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-09 00:46:23.155220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:46:23.155227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:46:23.155234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:46:23.155242 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:46:23.155249 | orchestrator | 2026-03-09 00:46:23.155256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155263 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:00.430) 0:00:27.885 ********** 2026-03-09 00:46:23.155271 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155278 | orchestrator | 2026-03-09 00:46:23.155285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155293 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:00.216) 0:00:28.101 ********** 2026-03-09 00:46:23.155300 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155307 | orchestrator | 2026-03-09 00:46:23.155315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155322 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:00.220) 0:00:28.321 ********** 2026-03-09 00:46:23.155329 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155336 | orchestrator | 2026-03-09 00:46:23.155344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155351 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.703) 0:00:29.025 ********** 2026-03-09 00:46:23.155358 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155365 | orchestrator | 2026-03-09 00:46:23.155373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:23.155380 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.249) 0:00:29.274 ********** 2026-03-09 00:46:23.155387 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155394 | orchestrator | 2026-03-09 00:46:23.155401 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-09 00:46:23.155409 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.219) 0:00:29.493 ********** 2026-03-09 00:46:23.155422 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:23.155429 | orchestrator | 2026-03-09 00:46:23.155445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.289601 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.213) 0:00:29.707 ********** 2026-03-09 00:46:35.289744 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.289769 | orchestrator | 2026-03-09 00:46:35.289788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.289806 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.250) 0:00:29.957 ********** 2026-03-09 00:46:35.289821 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.289836 | orchestrator | 2026-03-09 00:46:35.289852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.289868 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.245) 0:00:30.202 ********** 2026-03-09 00:46:35.289886 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127) 2026-03-09 00:46:35.289903 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127) 2026-03-09 00:46:35.289938 | orchestrator | 2026-03-09 00:46:35.289955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.289970 | orchestrator | Monday 09 March 2026 00:46:24 +0000 (0:00:00.438) 0:00:30.641 ********** 2026-03-09 00:46:35.289984 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d) 2026-03-09 00:46:35.290000 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d) 2026-03-09 00:46:35.290082 | orchestrator | 2026-03-09 00:46:35.290106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.290122 | orchestrator | Monday 09 March 2026 00:46:24 +0000 (0:00:00.476) 0:00:31.118 ********** 2026-03-09 00:46:35.290138 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238) 2026-03-09 00:46:35.290155 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238) 2026-03-09 00:46:35.290171 | orchestrator | 2026-03-09 00:46:35.290187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.290203 | orchestrator | Monday 09 March 2026 00:46:25 +0000 (0:00:00.466) 0:00:31.584 ********** 2026-03-09 00:46:35.290220 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16) 2026-03-09 00:46:35.290236 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16) 2026-03-09 00:46:35.290251 | orchestrator | 2026-03-09 00:46:35.290267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:35.290283 | orchestrator | Monday 09 March 2026 00:46:25 +0000 (0:00:00.882) 0:00:32.466 ********** 2026-03-09 00:46:35.290299 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:46:35.290315 | orchestrator | 2026-03-09 00:46:35.290330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.290344 | orchestrator | Monday 09 March 2026 00:46:26 +0000 (0:00:00.668) 0:00:33.134 ********** 2026-03-09 00:46:35.290368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-09 00:46:35.290383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:46:35.290398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:46:35.290412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:46:35.290426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:46:35.290467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:46:35.290587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:46:35.290607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:46:35.290622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-09 00:46:35.290636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:46:35.290650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:46:35.290665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:46:35.290684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:46:35.290707 | orchestrator | 2026-03-09 00:46:35.290731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.290756 | orchestrator | Monday 09 March 2026 00:46:27 +0000 (0:00:01.067) 0:00:34.202 ********** 2026-03-09 00:46:35.290780 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.290806 | orchestrator | 2026-03-09 
00:46:35.290832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.290858 | orchestrator | Monday 09 March 2026 00:46:27 +0000 (0:00:00.316) 0:00:34.518 ********** 2026-03-09 00:46:35.290883 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.290907 | orchestrator | 2026-03-09 00:46:35.290927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.290944 | orchestrator | Monday 09 March 2026 00:46:28 +0000 (0:00:00.221) 0:00:34.740 ********** 2026-03-09 00:46:35.290956 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.290970 | orchestrator | 2026-03-09 00:46:35.291013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291030 | orchestrator | Monday 09 March 2026 00:46:28 +0000 (0:00:00.224) 0:00:34.964 ********** 2026-03-09 00:46:35.291045 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291061 | orchestrator | 2026-03-09 00:46:35.291077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291092 | orchestrator | Monday 09 March 2026 00:46:28 +0000 (0:00:00.217) 0:00:35.182 ********** 2026-03-09 00:46:35.291108 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291124 | orchestrator | 2026-03-09 00:46:35.291140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291155 | orchestrator | Monday 09 March 2026 00:46:28 +0000 (0:00:00.209) 0:00:35.391 ********** 2026-03-09 00:46:35.291171 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291187 | orchestrator | 2026-03-09 00:46:35.291203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291218 | orchestrator | Monday 09 March 2026 00:46:29 +0000 (0:00:00.208) 
0:00:35.600 ********** 2026-03-09 00:46:35.291234 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291250 | orchestrator | 2026-03-09 00:46:35.291265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291281 | orchestrator | Monday 09 March 2026 00:46:29 +0000 (0:00:00.217) 0:00:35.817 ********** 2026-03-09 00:46:35.291296 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291309 | orchestrator | 2026-03-09 00:46:35.291322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291336 | orchestrator | Monday 09 March 2026 00:46:29 +0000 (0:00:00.210) 0:00:36.027 ********** 2026-03-09 00:46:35.291349 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-09 00:46:35.291365 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-09 00:46:35.291381 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-09 00:46:35.291397 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-09 00:46:35.291413 | orchestrator | 2026-03-09 00:46:35.291429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291461 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:00.891) 0:00:36.919 ********** 2026-03-09 00:46:35.291502 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291519 | orchestrator | 2026-03-09 00:46:35.291534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291548 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:00.200) 0:00:37.120 ********** 2026-03-09 00:46:35.291562 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291576 | orchestrator | 2026-03-09 00:46:35.291589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291603 | orchestrator | Monday 09 
March 2026 00:46:31 +0000 (0:00:00.696) 0:00:37.816 ********** 2026-03-09 00:46:35.291617 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291631 | orchestrator | 2026-03-09 00:46:35.291647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:46:35.291662 | orchestrator | Monday 09 March 2026 00:46:31 +0000 (0:00:00.218) 0:00:38.035 ********** 2026-03-09 00:46:35.291678 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291779 | orchestrator | 2026-03-09 00:46:35.291795 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-09 00:46:35.291821 | orchestrator | Monday 09 March 2026 00:46:31 +0000 (0:00:00.227) 0:00:38.263 ********** 2026-03-09 00:46:35.291835 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.291849 | orchestrator | 2026-03-09 00:46:35.291863 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-09 00:46:35.291877 | orchestrator | Monday 09 March 2026 00:46:31 +0000 (0:00:00.153) 0:00:38.416 ********** 2026-03-09 00:46:35.291892 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}}) 2026-03-09 00:46:35.291907 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}}) 2026-03-09 00:46:35.291921 | orchestrator | 2026-03-09 00:46:35.291935 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-09 00:46:35.291949 | orchestrator | Monday 09 March 2026 00:46:32 +0000 (0:00:00.185) 0:00:38.602 ********** 2026-03-09 00:46:35.291965 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}) 2026-03-09 00:46:35.291982 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}) 2026-03-09 00:46:35.291996 | orchestrator | 2026-03-09 00:46:35.292011 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-09 00:46:35.292025 | orchestrator | Monday 09 March 2026 00:46:33 +0000 (0:00:01.815) 0:00:40.417 ********** 2026-03-09 00:46:35.292039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:35.292055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:35.292070 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:35.292084 | orchestrator | 2026-03-09 00:46:35.292098 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-09 00:46:35.292113 | orchestrator | Monday 09 March 2026 00:46:34 +0000 (0:00:00.159) 0:00:40.577 ********** 2026-03-09 00:46:35.292127 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}) 2026-03-09 00:46:35.292153 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}) 2026-03-09 00:46:41.010368 | orchestrator | 2026-03-09 00:46:41.010469 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-09 00:46:41.010541 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:01.259) 0:00:41.837 ********** 2026-03-09 00:46:41.010555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 
'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.010568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.010579 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010591 | orchestrator | 2026-03-09 00:46:41.010603 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-09 00:46:41.010614 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:00.173) 0:00:42.010 ********** 2026-03-09 00:46:41.010625 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010636 | orchestrator | 2026-03-09 00:46:41.010648 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-09 00:46:41.010659 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:00.168) 0:00:42.179 ********** 2026-03-09 00:46:41.010670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.010681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.010692 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010703 | orchestrator | 2026-03-09 00:46:41.010714 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-09 00:46:41.010725 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:00.184) 0:00:42.363 ********** 2026-03-09 00:46:41.010736 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010747 | orchestrator | 2026-03-09 00:46:41.010758 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-09 00:46:41.010769 | orchestrator | Monday 
09 March 2026 00:46:35 +0000 (0:00:00.150) 0:00:42.514 ********** 2026-03-09 00:46:41.010780 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.010791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.010802 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010813 | orchestrator | 2026-03-09 00:46:41.010824 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-09 00:46:41.010848 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:00.384) 0:00:42.899 ********** 2026-03-09 00:46:41.010859 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010870 | orchestrator | 2026-03-09 00:46:41.010881 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-09 00:46:41.010892 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:00.141) 0:00:43.041 ********** 2026-03-09 00:46:41.010905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.010917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.010930 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.010943 | orchestrator | 2026-03-09 00:46:41.010957 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-09 00:46:41.010970 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:00.170) 0:00:43.212 ********** 2026-03-09 00:46:41.010983 | orchestrator | ok: [testbed-node-4] 
2026-03-09 00:46:41.010997 | orchestrator | 2026-03-09 00:46:41.011009 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-09 00:46:41.011030 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:00.145) 0:00:43.357 ********** 2026-03-09 00:46:41.011045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.011058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.011070 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011081 | orchestrator | 2026-03-09 00:46:41.011092 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-09 00:46:41.011103 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:00.145) 0:00:43.503 ********** 2026-03-09 00:46:41.011114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:41.011125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.011136 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011147 | orchestrator | 2026-03-09 00:46:41.011158 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-09 00:46:41.011187 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.146) 0:00:43.649 ********** 2026-03-09 00:46:41.011205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 
00:46:41.011224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:41.011235 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011246 | orchestrator | 2026-03-09 00:46:41.011257 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-09 00:46:41.011268 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.164) 0:00:43.813 ********** 2026-03-09 00:46:41.011278 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011289 | orchestrator | 2026-03-09 00:46:41.011300 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-09 00:46:41.011310 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.135) 0:00:43.949 ********** 2026-03-09 00:46:41.011321 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011332 | orchestrator | 2026-03-09 00:46:41.011342 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-09 00:46:41.011353 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.146) 0:00:44.095 ********** 2026-03-09 00:46:41.011364 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011375 | orchestrator | 2026-03-09 00:46:41.011386 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-09 00:46:41.011396 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.132) 0:00:44.228 ********** 2026-03-09 00:46:41.011407 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:46:41.011418 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-09 00:46:41.011429 | orchestrator | } 2026-03-09 00:46:41.011439 | orchestrator | 2026-03-09 00:46:41.011450 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-09 
00:46:41.011461 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.159) 0:00:44.387 ********** 2026-03-09 00:46:41.011472 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:46:41.011542 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-09 00:46:41.011554 | orchestrator | } 2026-03-09 00:46:41.011565 | orchestrator | 2026-03-09 00:46:41.011576 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-09 00:46:41.011587 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.163) 0:00:44.551 ********** 2026-03-09 00:46:41.011606 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:46:41.011617 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-09 00:46:41.011628 | orchestrator | } 2026-03-09 00:46:41.011638 | orchestrator | 2026-03-09 00:46:41.011649 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-09 00:46:41.011660 | orchestrator | Monday 09 March 2026 00:46:38 +0000 (0:00:00.313) 0:00:44.865 ********** 2026-03-09 00:46:41.011671 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:41.011681 | orchestrator | 2026-03-09 00:46:41.011692 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-09 00:46:41.011703 | orchestrator | Monday 09 March 2026 00:46:38 +0000 (0:00:00.504) 0:00:45.369 ********** 2026-03-09 00:46:41.011714 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:41.011725 | orchestrator | 2026-03-09 00:46:41.011736 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-09 00:46:41.011747 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.503) 0:00:45.872 ********** 2026-03-09 00:46:41.011758 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:41.011769 | orchestrator | 2026-03-09 00:46:41.011780 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-09 00:46:41.011790 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.493) 0:00:46.366 ********** 2026-03-09 00:46:41.011801 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:41.011812 | orchestrator | 2026-03-09 00:46:41.011822 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-09 00:46:41.011833 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.183) 0:00:46.549 ********** 2026-03-09 00:46:41.011844 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011855 | orchestrator | 2026-03-09 00:46:41.011873 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-09 00:46:41.011884 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.115) 0:00:46.665 ********** 2026-03-09 00:46:41.011895 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.011906 | orchestrator | 2026-03-09 00:46:41.011917 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-09 00:46:41.011928 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.128) 0:00:46.794 ********** 2026-03-09 00:46:41.011938 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:46:41.011950 | orchestrator |  "vgs_report": { 2026-03-09 00:46:41.011961 | orchestrator |  "vg": [] 2026-03-09 00:46:41.011972 | orchestrator |  } 2026-03-09 00:46:41.011983 | orchestrator | } 2026-03-09 00:46:41.011994 | orchestrator | 2026-03-09 00:46:41.012005 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-09 00:46:41.012016 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.151) 0:00:46.945 ********** 2026-03-09 00:46:41.012027 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.012038 | orchestrator | 2026-03-09 00:46:41.012048 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-09 00:46:41.012059 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.156) 0:00:47.101 ********** 2026-03-09 00:46:41.012070 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.012081 | orchestrator | 2026-03-09 00:46:41.012092 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-09 00:46:41.012103 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.161) 0:00:47.263 ********** 2026-03-09 00:46:41.012114 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.012125 | orchestrator | 2026-03-09 00:46:41.012136 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-09 00:46:41.012147 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.140) 0:00:47.403 ********** 2026-03-09 00:46:41.012157 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:41.012168 | orchestrator | 2026-03-09 00:46:41.012187 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-09 00:46:46.015117 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.156) 0:00:47.560 ********** 2026-03-09 00:46:46.015268 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015298 | orchestrator | 2026-03-09 00:46:46.015313 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-09 00:46:46.015337 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.353) 0:00:47.914 ********** 2026-03-09 00:46:46.015348 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015360 | orchestrator | 2026-03-09 00:46:46.015371 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-09 00:46:46.015382 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.145) 0:00:48.059 ********** 2026-03-09 00:46:46.015393 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 00:46:46.015404 | orchestrator | 2026-03-09 00:46:46.015415 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-09 00:46:46.015426 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.132) 0:00:48.192 ********** 2026-03-09 00:46:46.015437 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015448 | orchestrator | 2026-03-09 00:46:46.015459 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-09 00:46:46.015470 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.135) 0:00:48.327 ********** 2026-03-09 00:46:46.015556 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015568 | orchestrator | 2026-03-09 00:46:46.015579 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-09 00:46:46.015590 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.144) 0:00:48.472 ********** 2026-03-09 00:46:46.015601 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015612 | orchestrator | 2026-03-09 00:46:46.015623 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-09 00:46:46.015634 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.142) 0:00:48.614 ********** 2026-03-09 00:46:46.015644 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015658 | orchestrator | 2026-03-09 00:46:46.015671 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-09 00:46:46.015684 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.141) 0:00:48.756 ********** 2026-03-09 00:46:46.015697 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015709 | orchestrator | 2026-03-09 00:46:46.015723 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-09 00:46:46.015736 | orchestrator | 
Monday 09 March 2026 00:46:42 +0000 (0:00:00.153) 0:00:48.909 ********** 2026-03-09 00:46:46.015749 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015761 | orchestrator | 2026-03-09 00:46:46.015774 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-09 00:46:46.015787 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.150) 0:00:49.060 ********** 2026-03-09 00:46:46.015800 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015814 | orchestrator | 2026-03-09 00:46:46.015828 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-09 00:46:46.015858 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.145) 0:00:49.205 ********** 2026-03-09 00:46:46.015873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.015888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.015901 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.015915 | orchestrator | 2026-03-09 00:46:46.015928 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-09 00:46:46.015941 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.152) 0:00:49.358 ********** 2026-03-09 00:46:46.015952 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.015972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.015983 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:46:46.015994 | orchestrator | 2026-03-09 00:46:46.016004 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-09 00:46:46.016015 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.166) 0:00:49.524 ********** 2026-03-09 00:46:46.016026 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016048 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016059 | orchestrator | 2026-03-09 00:46:46.016070 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-09 00:46:46.016081 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:00.421) 0:00:49.946 ********** 2026-03-09 00:46:46.016092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016114 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016125 | orchestrator | 2026-03-09 00:46:46.016154 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-09 00:46:46.016166 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:00.169) 0:00:50.115 ********** 2026-03-09 00:46:46.016177 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 
'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016199 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016210 | orchestrator | 2026-03-09 00:46:46.016221 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-09 00:46:46.016232 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:00.181) 0:00:50.297 ********** 2026-03-09 00:46:46.016243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016266 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016277 | orchestrator | 2026-03-09 00:46:46.016288 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-09 00:46:46.016299 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:00.175) 0:00:50.473 ********** 2026-03-09 00:46:46.016310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016332 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016343 | orchestrator | 2026-03-09 00:46:46.016354 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-09 
00:46:46.016365 | orchestrator | Monday 09 March 2026 00:46:44 +0000 (0:00:00.169) 0:00:50.643 ********** 2026-03-09 00:46:46.016376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016410 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016421 | orchestrator | 2026-03-09 00:46:46.016433 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-09 00:46:46.016444 | orchestrator | Monday 09 March 2026 00:46:44 +0000 (0:00:00.163) 0:00:50.806 ********** 2026-03-09 00:46:46.016455 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:46.016466 | orchestrator | 2026-03-09 00:46:46.016502 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-09 00:46:46.016520 | orchestrator | Monday 09 March 2026 00:46:44 +0000 (0:00:00.528) 0:00:51.334 ********** 2026-03-09 00:46:46.016539 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:46.016558 | orchestrator | 2026-03-09 00:46:46.016577 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-09 00:46:46.016595 | orchestrator | Monday 09 March 2026 00:46:45 +0000 (0:00:00.509) 0:00:51.844 ********** 2026-03-09 00:46:46.016611 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:46:46.016622 | orchestrator | 2026-03-09 00:46:46.016633 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-09 00:46:46.016644 | orchestrator | Monday 09 March 2026 00:46:45 +0000 (0:00:00.159) 0:00:52.004 ********** 2026-03-09 00:46:46.016655 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'vg_name': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}) 2026-03-09 00:46:46.016668 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'vg_name': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}) 2026-03-09 00:46:46.016679 | orchestrator | 2026-03-09 00:46:46.016690 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-09 00:46:46.016701 | orchestrator | Monday 09 March 2026 00:46:45 +0000 (0:00:00.190) 0:00:52.195 ********** 2026-03-09 00:46:46.016712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:46.016734 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:46.016745 | orchestrator | 2026-03-09 00:46:46.016756 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-09 00:46:46.016767 | orchestrator | Monday 09 March 2026 00:46:45 +0000 (0:00:00.202) 0:00:52.397 ********** 2026-03-09 00:46:46.016778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:46.016797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:52.749140 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:52.749223 | orchestrator | 2026-03-09 00:46:52.749234 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:46:52.749243 | 
orchestrator | Monday 09 March 2026 00:46:46 +0000 (0:00:00.170) 0:00:52.567 ********** 2026-03-09 00:46:52.749249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'})  2026-03-09 00:46:52.749258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'})  2026-03-09 00:46:52.749264 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:46:52.749271 | orchestrator | 2026-03-09 00:46:52.749277 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:46:52.749302 | orchestrator | Monday 09 March 2026 00:46:46 +0000 (0:00:00.170) 0:00:52.738 ********** 2026-03-09 00:46:52.749309 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:46:52.749316 | orchestrator |  "lvm_report": { 2026-03-09 00:46:52.749324 | orchestrator |  "lv": [ 2026-03-09 00:46:52.749331 | orchestrator |  { 2026-03-09 00:46:52.749337 | orchestrator |  "lv_name": "osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2", 2026-03-09 00:46:52.749344 | orchestrator |  "vg_name": "ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2" 2026-03-09 00:46:52.749351 | orchestrator |  }, 2026-03-09 00:46:52.749357 | orchestrator |  { 2026-03-09 00:46:52.749363 | orchestrator |  "lv_name": "osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0", 2026-03-09 00:46:52.749370 | orchestrator |  "vg_name": "ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0" 2026-03-09 00:46:52.749376 | orchestrator |  } 2026-03-09 00:46:52.749382 | orchestrator |  ], 2026-03-09 00:46:52.749388 | orchestrator |  "pv": [ 2026-03-09 00:46:52.749395 | orchestrator |  { 2026-03-09 00:46:52.749401 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:46:52.749408 | orchestrator |  "vg_name": "ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0" 2026-03-09 00:46:52.749414 | orchestrator |  }, 2026-03-09 
00:46:52.749420 | orchestrator |  { 2026-03-09 00:46:52.749426 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:46:52.749433 | orchestrator |  "vg_name": "ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2" 2026-03-09 00:46:52.749439 | orchestrator |  } 2026-03-09 00:46:52.749445 | orchestrator |  ] 2026-03-09 00:46:52.749451 | orchestrator |  } 2026-03-09 00:46:52.749458 | orchestrator | } 2026-03-09 00:46:52.749464 | orchestrator | 2026-03-09 00:46:52.749471 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-09 00:46:52.749552 | orchestrator | 2026-03-09 00:46:52.749559 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:46:52.749565 | orchestrator | Monday 09 March 2026 00:46:46 +0000 (0:00:00.554) 0:00:53.293 ********** 2026-03-09 00:46:52.749571 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-09 00:46:52.749578 | orchestrator | 2026-03-09 00:46:52.749584 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:46:52.749591 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:00.291) 0:00:53.584 ********** 2026-03-09 00:46:52.749598 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:46:52.749604 | orchestrator | 2026-03-09 00:46:52.749610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:52.749617 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:00.279) 0:00:53.864 ********** 2026-03-09 00:46:52.749623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-09 00:46:52.749630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-09 00:46:52.749636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-09 00:46:52.749642 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-09 00:46:52.749648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-09 00:46:52.749654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-09 00:46:52.749660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-09 00:46:52.749666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-09 00:46:52.749673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-09 00:46:52.749679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-09 00:46:52.749691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-09 00:46:52.749698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-09 00:46:52.749706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-09 00:46:52.749713 | orchestrator | 2026-03-09 00:46:52.749721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:52.749731 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:00.463) 0:00:54.328 ********** 2026-03-09 00:46:52.749738 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:46:52.749746 | orchestrator | 2026-03-09 00:46:52.749753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:46:52.749760 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:00.204) 0:00:54.532 ********** 2026-03-09 00:46:52.749768 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:46:52.749775 | orchestrator | 2026-03-09 
00:46:52.749782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749802 | orchestrator | Monday 09 March 2026 00:46:48 +0000 (0:00:00.195) 0:00:54.728 **********
2026-03-09 00:46:52.749810 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749817 | orchestrator |
2026-03-09 00:46:52.749825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749832 | orchestrator | Monday 09 March 2026 00:46:48 +0000 (0:00:00.208) 0:00:54.936 **********
2026-03-09 00:46:52.749839 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749846 | orchestrator |
2026-03-09 00:46:52.749853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749897 | orchestrator | Monday 09 March 2026 00:46:48 +0000 (0:00:00.249) 0:00:55.185 **********
2026-03-09 00:46:52.749905 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749912 | orchestrator |
2026-03-09 00:46:52.749919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749926 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:00.678) 0:00:55.864 **********
2026-03-09 00:46:52.749933 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749940 | orchestrator |
2026-03-09 00:46:52.749947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749955 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:00.205) 0:00:56.070 **********
2026-03-09 00:46:52.749962 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749969 | orchestrator |
2026-03-09 00:46:52.749977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.749984 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:00.225) 0:00:56.295 **********
2026-03-09 00:46:52.749991 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:46:52.749998 | orchestrator |
2026-03-09 00:46:52.750005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.750061 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:00.221) 0:00:56.516 **********
2026-03-09 00:46:52.750070 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3)
2026-03-09 00:46:52.750079 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3)
2026-03-09 00:46:52.750087 | orchestrator |
2026-03-09 00:46:52.750094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.750102 | orchestrator | Monday 09 March 2026 00:46:50 +0000 (0:00:00.465) 0:00:56.982 **********
2026-03-09 00:46:52.750109 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396)
2026-03-09 00:46:52.750117 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396)
2026-03-09 00:46:52.750123 | orchestrator |
2026-03-09 00:46:52.750129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.750145 | orchestrator | Monday 09 March 2026 00:46:50 +0000 (0:00:00.496) 0:00:57.478 **********
2026-03-09 00:46:52.750152 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0)
2026-03-09 00:46:52.750158 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0)
2026-03-09 00:46:52.750165 | orchestrator |
2026-03-09 00:46:52.750171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.750177 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.504) 0:00:57.983 **********
2026-03-09 00:46:52.750183 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc)
2026-03-09 00:46:52.750190 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc)
2026-03-09 00:46:52.750196 | orchestrator |
2026-03-09 00:46:52.750202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:46:52.750209 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.490) 0:00:58.474 **********
2026-03-09 00:46:52.750215 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:46:52.750221 | orchestrator |
2026-03-09 00:46:52.750227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:46:52.750234 | orchestrator | Monday 09 March 2026 00:46:52 +0000 (0:00:00.364) 0:00:58.838 **********
2026-03-09 00:46:52.750240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:46:52.750246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:46:52.750252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:46:52.750258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:46:52.750265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:46:52.750271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:46:52.750277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:46:52.750283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:46:52.750289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-09 00:46:52.750296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:46:52.750302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:46:52.750314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:47:02.036682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:47:02.036777 | orchestrator |
2026-03-09 00:47:02.036788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036796 | orchestrator | Monday 09 March 2026 00:46:52 +0000 (0:00:00.459) 0:00:59.298 **********
2026-03-09 00:47:02.036802 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036810 | orchestrator |
2026-03-09 00:47:02.036817 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036823 | orchestrator | Monday 09 March 2026 00:46:52 +0000 (0:00:00.211) 0:00:59.510 **********
2026-03-09 00:47:02.036830 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036836 | orchestrator |
2026-03-09 00:47:02.036842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036848 | orchestrator | Monday 09 March 2026 00:46:53 +0000 (0:00:00.753) 0:01:00.263 **********
2026-03-09 00:47:02.036855 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036882 | orchestrator |
2026-03-09 00:47:02.036889 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036895 | orchestrator | Monday 09 March 2026 00:46:53 +0000 (0:00:00.202) 0:01:00.466 **********
2026-03-09 00:47:02.036902 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036908 | orchestrator |
2026-03-09 00:47:02.036914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036921 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:00.250) 0:01:00.716 **********
2026-03-09 00:47:02.036927 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036933 | orchestrator |
2026-03-09 00:47:02.036939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036946 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:00.278) 0:01:00.994 **********
2026-03-09 00:47:02.036952 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036958 | orchestrator |
2026-03-09 00:47:02.036964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036971 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:00.245) 0:01:01.240 **********
2026-03-09 00:47:02.036977 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.036983 | orchestrator |
2026-03-09 00:47:02.036989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.036995 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:00.209) 0:01:01.450 **********
2026-03-09 00:47:02.037002 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037008 | orchestrator |
2026-03-09 00:47:02.037014 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.037020 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.199) 0:01:01.650 **********
2026-03-09 00:47:02.037027 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-09 00:47:02.037052 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-09 00:47:02.037064 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-09 00:47:02.037075 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-09 00:47:02.037085 | orchestrator |
2026-03-09 00:47:02.037095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.037105 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:00.684) 0:01:02.334 **********
2026-03-09 00:47:02.037114 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037123 | orchestrator |
2026-03-09 00:47:02.037133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.037144 | orchestrator | Monday 09 March 2026 00:46:56 +0000 (0:00:00.287) 0:01:02.622 **********
2026-03-09 00:47:02.037154 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037166 | orchestrator |
2026-03-09 00:47:02.037177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.037187 | orchestrator | Monday 09 March 2026 00:46:56 +0000 (0:00:00.236) 0:01:02.858 **********
2026-03-09 00:47:02.037198 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037220 | orchestrator |
2026-03-09 00:47:02.037236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:47:02.037248 | orchestrator | Monday 09 March 2026 00:46:56 +0000 (0:00:00.197) 0:01:03.056 **********
2026-03-09 00:47:02.037259 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037269 | orchestrator |
2026-03-09 00:47:02.037280 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:47:02.037290 | orchestrator | Monday 09 March 2026 00:46:56 +0000 (0:00:00.213) 0:01:03.270 **********
2026-03-09 00:47:02.037299 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037309 | orchestrator |
2026-03-09 00:47:02.037318 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:47:02.037328 | orchestrator | Monday 09 March 2026 00:46:57 +0000 (0:00:00.357) 0:01:03.628 **********
2026-03-09 00:47:02.037338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}})
2026-03-09 00:47:02.037359 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bfced398-94c6-51d2-a38a-d9d8acf734fd'}})
2026-03-09 00:47:02.037369 | orchestrator |
2026-03-09 00:47:02.037380 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:47:02.037391 | orchestrator | Monday 09 March 2026 00:46:57 +0000 (0:00:00.250) 0:01:03.878 **********
2026-03-09 00:47:02.037402 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037414 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037421 | orchestrator |
2026-03-09 00:47:02.037428 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:47:02.037448 | orchestrator | Monday 09 March 2026 00:46:59 +0000 (0:00:01.750) 0:01:05.629 **********
2026-03-09 00:47:02.037455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037495 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037503 | orchestrator |
2026-03-09 00:47:02.037509 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:47:02.037515 | orchestrator | Monday 09 March 2026 00:46:59 +0000 (0:00:00.158) 0:01:05.787 **********
2026-03-09 00:47:02.037522 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037529 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037535 | orchestrator |
2026-03-09 00:47:02.037541 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:47:02.037547 | orchestrator | Monday 09 March 2026 00:47:00 +0000 (0:00:01.183) 0:01:06.970 **********
2026-03-09 00:47:02.037554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037566 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037572 | orchestrator |
2026-03-09 00:47:02.037579 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:47:02.037585 | orchestrator | Monday 09 March 2026 00:47:00 +0000 (0:00:00.142) 0:01:07.140 **********
2026-03-09 00:47:02.037591 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037597 | orchestrator |
2026-03-09 00:47:02.037603 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:47:02.037610 | orchestrator | Monday 09 March 2026 00:47:00 +0000 (0:00:00.142) 0:01:07.283 **********
2026-03-09 00:47:02.037616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037634 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037641 | orchestrator |
2026-03-09 00:47:02.037647 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:47:02.037653 | orchestrator | Monday 09 March 2026 00:47:00 +0000 (0:00:00.158) 0:01:07.441 **********
2026-03-09 00:47:02.037664 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037670 | orchestrator |
2026-03-09 00:47:02.037677 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:47:02.037683 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:00.137) 0:01:07.578 **********
2026-03-09 00:47:02.037690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037702 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037708 | orchestrator |
2026-03-09 00:47:02.037715 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-09 00:47:02.037721 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:00.151) 0:01:07.730 **********
2026-03-09 00:47:02.037727 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037733 | orchestrator |
2026-03-09 00:47:02.037740 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-09 00:47:02.037746 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:00.139) 0:01:07.870 **********
2026-03-09 00:47:02.037752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:02.037758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:02.037765 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:02.037771 | orchestrator |
2026-03-09 00:47:02.037777 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-09 00:47:02.037783 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:00.158) 0:01:08.028 **********
2026-03-09 00:47:02.037789 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:02.037796 | orchestrator |
2026-03-09 00:47:02.037802 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-09 00:47:02.037809 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:00.388) 0:01:08.416 **********
2026-03-09 00:47:02.037820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:08.268290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:08.268394 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268409 | orchestrator |
2026-03-09 00:47:08.268422 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-09 00:47:08.268436 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.173) 0:01:08.590 **********
2026-03-09 00:47:08.268447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:08.268459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:08.268584 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268606 | orchestrator |
2026-03-09 00:47:08.268625 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-09 00:47:08.268643 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.157) 0:01:08.748 **********
2026-03-09 00:47:08.268660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:08.268678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:08.268731 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268751 | orchestrator |
2026-03-09 00:47:08.268770 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-09 00:47:08.268789 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.170) 0:01:08.918 **********
2026-03-09 00:47:08.268809 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268829 | orchestrator |
2026-03-09 00:47:08.268848 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-09 00:47:08.268863 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.130) 0:01:09.048 **********
2026-03-09 00:47:08.268876 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268888 | orchestrator |
2026-03-09 00:47:08.268902 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-09 00:47:08.268915 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.135) 0:01:09.184 **********
2026-03-09 00:47:08.268927 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.268941 | orchestrator |
2026-03-09 00:47:08.268953 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-09 00:47:08.268967 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.127) 0:01:09.312 **********
2026-03-09 00:47:08.268980 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:47:08.268994 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-09 00:47:08.269007 | orchestrator | }
2026-03-09 00:47:08.269020 | orchestrator |
2026-03-09 00:47:08.269032 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-09 00:47:08.269046 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.151) 0:01:09.463 **********
2026-03-09 00:47:08.269059 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:47:08.269072 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-09 00:47:08.269084 | orchestrator | }
2026-03-09 00:47:08.269098 | orchestrator |
2026-03-09 00:47:08.269111 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-09 00:47:08.269130 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:00.138) 0:01:09.602 **********
2026-03-09 00:47:08.269149 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:47:08.269168 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-09 00:47:08.269186 | orchestrator | }
2026-03-09 00:47:08.269204 | orchestrator |
2026-03-09 00:47:08.269221 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-09 00:47:08.269238 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:00.180) 0:01:09.783 **********
2026-03-09 00:47:08.269256 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:08.269275 | orchestrator |
2026-03-09 00:47:08.269293 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-09 00:47:08.269312 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:00.522) 0:01:10.305 **********
2026-03-09 00:47:08.269331 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:08.269350 | orchestrator |
2026-03-09 00:47:08.269368 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-09 00:47:08.269387 | orchestrator | Monday 09 March 2026 00:47:04 +0000 (0:00:00.556) 0:01:10.862 **********
2026-03-09 00:47:08.269406 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:08.269425 | orchestrator |
2026-03-09 00:47:08.269444 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-09 00:47:08.269463 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.737) 0:01:11.600 **********
2026-03-09 00:47:08.269509 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:08.269529 | orchestrator |
2026-03-09 00:47:08.269550 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-09 00:47:08.269567 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.150) 0:01:11.751 **********
2026-03-09 00:47:08.269584 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.269601 | orchestrator |
2026-03-09 00:47:08.269619 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-09 00:47:08.269651 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.106) 0:01:11.857 **********
2026-03-09 00:47:08.269669 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.269686 | orchestrator |
2026-03-09 00:47:08.269705 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-09 00:47:08.269745 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.114) 0:01:11.972 **********
2026-03-09 00:47:08.269766 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:47:08.269784 | orchestrator |     "vgs_report": {
2026-03-09 00:47:08.269805 | orchestrator |         "vg": []
2026-03-09 00:47:08.269858 | orchestrator |     }
2026-03-09 00:47:08.269872 | orchestrator | }
2026-03-09 00:47:08.269883 | orchestrator |
2026-03-09 00:47:08.269895 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:47:08.269906 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.158) 0:01:12.130 **********
2026-03-09 00:47:08.269916 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.269927 | orchestrator |
2026-03-09 00:47:08.269938 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:47:08.269949 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.136) 0:01:12.267 **********
2026-03-09 00:47:08.269960 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.269971 | orchestrator |
2026-03-09 00:47:08.269981 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:47:08.269992 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.138) 0:01:12.405 **********
2026-03-09 00:47:08.270003 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270014 | orchestrator |
2026-03-09 00:47:08.270097 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:47:08.270109 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:00.135) 0:01:12.540 **********
2026-03-09 00:47:08.270120 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270131 | orchestrator |
2026-03-09 00:47:08.270142 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:47:08.270153 | orchestrator | Monday 09 March 2026 00:47:06 +0000 (0:00:00.141) 0:01:12.682 **********
2026-03-09 00:47:08.270164 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270174 | orchestrator |
2026-03-09 00:47:08.270185 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-09 00:47:08.270196 | orchestrator | Monday 09 March 2026 00:47:06 +0000 (0:00:00.138) 0:01:12.820 **********
2026-03-09 00:47:08.270207 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270218 | orchestrator |
2026-03-09 00:47:08.270228 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-09 00:47:08.270239 | orchestrator | Monday 09 March 2026 00:47:06 +0000 (0:00:00.126) 0:01:12.947 **********
2026-03-09 00:47:08.270250 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270261 | orchestrator |
2026-03-09 00:47:08.270271 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-09 00:47:08.270282 | orchestrator | Monday 09 March 2026 00:47:06 +0000 (0:00:00.161) 0:01:13.108 **********
2026-03-09 00:47:08.270293 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270304 | orchestrator |
2026-03-09 00:47:08.270315 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-09 00:47:08.270325 | orchestrator | Monday 09 March 2026 00:47:06 +0000 (0:00:00.381) 0:01:13.490 **********
2026-03-09 00:47:08.270336 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270347 | orchestrator |
2026-03-09 00:47:08.270365 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-09 00:47:08.270376 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.140) 0:01:13.630 **********
2026-03-09 00:47:08.270387 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270398 | orchestrator |
2026-03-09 00:47:08.270409 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-09 00:47:08.270420 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.136) 0:01:13.766 **********
2026-03-09 00:47:08.270443 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270454 | orchestrator |
2026-03-09 00:47:08.270465 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-09 00:47:08.270497 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.126) 0:01:13.893 **********
2026-03-09 00:47:08.270508 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270519 | orchestrator |
2026-03-09 00:47:08.270532 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-09 00:47:08.270552 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.145) 0:01:14.038 **********
2026-03-09 00:47:08.270571 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270590 | orchestrator |
2026-03-09 00:47:08.270610 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-09 00:47:08.270629 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.166) 0:01:14.205 **********
2026-03-09 00:47:08.270647 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270666 | orchestrator |
2026-03-09 00:47:08.270686 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-09 00:47:08.270705 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.138) 0:01:14.343 **********
2026-03-09 00:47:08.270724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:08.270745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:08.270764 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270783 | orchestrator |
2026-03-09 00:47:08.270795 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-09 00:47:08.270806 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:00.168) 0:01:14.511 **********
2026-03-09 00:47:08.270817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:08.270828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:08.270839 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:08.270850 | orchestrator |
2026-03-09 00:47:08.270861 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-09 00:47:08.270871 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:00.150) 0:01:14.662 **********
2026-03-09 00:47:08.270896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325108 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325117 | orchestrator |
2026-03-09 00:47:11.325124 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-09 00:47:11.325131 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:00.160) 0:01:14.822 **********
2026-03-09 00:47:11.325137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325149 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325155 | orchestrator |
2026-03-09 00:47:11.325160 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-09 00:47:11.325166 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:00.152) 0:01:14.975 **********
2026-03-09 00:47:11.325191 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325202 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325208 | orchestrator |
2026-03-09 00:47:11.325213 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-09 00:47:11.325219 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:00.162) 0:01:15.137 **********
2026-03-09 00:47:11.325224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325245 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325251 | orchestrator |
2026-03-09 00:47:11.325257 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-09 00:47:11.325262 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:00.397) 0:01:15.534 **********
2026-03-09 00:47:11.325268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325279 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325284 | orchestrator |
2026-03-09 00:47:11.325290 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-09 00:47:11.325296 | orchestrator | Monday 09 March 2026 00:47:09 +0000 (0:00:00.163) 0:01:15.698 **********
2026-03-09 00:47:11.325301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325312 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325317 | orchestrator |
2026-03-09 00:47:11.325323 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-09 00:47:11.325328 | orchestrator | Monday 09 March 2026 00:47:09 +0000 (0:00:00.149) 0:01:15.847 **********
2026-03-09 00:47:11.325334 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:11.325340 | orchestrator |
2026-03-09 00:47:11.325345 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-09 00:47:11.325351 | orchestrator | Monday 09 March 2026 00:47:09 +0000 (0:00:00.546) 0:01:16.393 **********
2026-03-09 00:47:11.325359 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:11.325368 | orchestrator |
2026-03-09 00:47:11.325377 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-09 00:47:11.325386 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:00.510) 0:01:16.904 **********
2026-03-09 00:47:11.325394 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:11.325403 | orchestrator |
2026-03-09 00:47:11.325412 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-09 00:47:11.325421 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:00.149) 0:01:17.053 **********
2026-03-09 00:47:11.325429 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'vg_name': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325439 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'vg_name': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325455 | orchestrator |
2026-03-09 00:47:11.325510 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-09 00:47:11.325522 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:00.182) 0:01:17.236 **********
2026-03-09 00:47:11.325546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325556 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325565 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325573 | orchestrator |
2026-03-09 00:47:11.325583 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-09 00:47:11.325593 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:00.151) 0:01:17.387 **********
2026-03-09 00:47:11.325604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325624 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325634 | orchestrator |
2026-03-09 00:47:11.325644 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-09 00:47:11.325653 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:00.148) 0:01:17.536 **********
2026-03-09 00:47:11.325664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'})
2026-03-09 00:47:11.325674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'})
2026-03-09 00:47:11.325684 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:47:11.325694 | orchestrator |
2026-03-09 00:47:11.325704 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-09 00:47:11.325714 | orchestrator | Monday 09 March 2026 00:47:11 +0000 (0:00:00.169) 0:01:17.706 **********
2026-03-09 00:47:11.325724 |
orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:47:11.325733 | orchestrator |  "lvm_report": { 2026-03-09 00:47:11.325743 | orchestrator |  "lv": [ 2026-03-09 00:47:11.325753 | orchestrator |  { 2026-03-09 00:47:11.325763 | orchestrator |  "lv_name": "osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd", 2026-03-09 00:47:11.325778 | orchestrator |  "vg_name": "ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd" 2026-03-09 00:47:11.325788 | orchestrator |  }, 2026-03-09 00:47:11.325798 | orchestrator |  { 2026-03-09 00:47:11.325807 | orchestrator |  "lv_name": "osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd", 2026-03-09 00:47:11.325814 | orchestrator |  "vg_name": "ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd" 2026-03-09 00:47:11.325823 | orchestrator |  } 2026-03-09 00:47:11.325833 | orchestrator |  ], 2026-03-09 00:47:11.325843 | orchestrator |  "pv": [ 2026-03-09 00:47:11.325853 | orchestrator |  { 2026-03-09 00:47:11.325863 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:47:11.325873 | orchestrator |  "vg_name": "ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd" 2026-03-09 00:47:11.325883 | orchestrator |  }, 2026-03-09 00:47:11.325893 | orchestrator |  { 2026-03-09 00:47:11.325903 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:47:11.325913 | orchestrator |  "vg_name": "ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd" 2026-03-09 00:47:11.325922 | orchestrator |  } 2026-03-09 00:47:11.325932 | orchestrator |  ] 2026-03-09 00:47:11.325941 | orchestrator |  } 2026-03-09 00:47:11.325952 | orchestrator | } 2026-03-09 00:47:11.325968 | orchestrator | 2026-03-09 00:47:11.325978 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:47:11.325987 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:47:11.325997 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:47:11.326006 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:47:11.326070 | orchestrator | 2026-03-09 00:47:11.326083 | orchestrator | 2026-03-09 00:47:11.326092 | orchestrator | 2026-03-09 00:47:11.326100 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:47:11.326110 | orchestrator | Monday 09 March 2026 00:47:11 +0000 (0:00:00.146) 0:01:17.852 ********** 2026-03-09 00:47:11.326119 | orchestrator | =============================================================================== 2026-03-09 00:47:11.326128 | orchestrator | Create block VGs -------------------------------------------------------- 5.51s 2026-03-09 00:47:11.326137 | orchestrator | Create block LVs -------------------------------------------------------- 3.87s 2026-03-09 00:47:11.326146 | orchestrator | Add known partitions to the list of available block devices ------------- 2.12s 2026-03-09 00:47:11.326155 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.72s 2026-03-09 00:47:11.326164 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.67s 2026-03-09 00:47:11.326173 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-03-09 00:47:11.326182 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2026-03-09 00:47:11.326191 | orchestrator | Add known links to the list of available block devices ------------------ 1.48s 2026-03-09 00:47:11.326211 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.48s 2026-03-09 00:47:11.773412 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s 2026-03-09 00:47:11.773583 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2026-03-09 00:47:11.773600 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-03-09 00:47:11.773612 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-09 00:47:11.773623 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-03-09 00:47:11.773634 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2026-03-09 00:47:11.773645 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2026-03-09 00:47:11.773656 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.76s 2026-03-09 00:47:11.773667 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-03-09 00:47:11.773678 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-03-09 00:47:11.773689 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.74s 2026-03-09 00:47:24.369380 | orchestrator | 2026-03-09 00:47:24 | INFO  | Task 9aed6c5b-c56a-437a-bcf4-e0c519b54d05 (facts) was prepared for execution. 2026-03-09 00:47:24.369546 | orchestrator | 2026-03-09 00:47:24 | INFO  | It takes a moment until task 9aed6c5b-c56a-437a-bcf4-e0c519b54d05 (facts) has been started and output is visible here. 
2026-03-09 00:47:37.849126 | orchestrator | 2026-03-09 00:47:37.849204 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-09 00:47:37.849211 | orchestrator | 2026-03-09 00:47:37.849217 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:47:37.849222 | orchestrator | Monday 09 March 2026 00:47:28 +0000 (0:00:00.269) 0:00:00.269 ********** 2026-03-09 00:47:37.849251 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:47:37.849257 | orchestrator | ok: [testbed-manager] 2026-03-09 00:47:37.849261 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:47:37.849265 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:47:37.849270 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:47:37.849274 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:47:37.849278 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:47:37.849282 | orchestrator | 2026-03-09 00:47:37.849287 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:47:37.849291 | orchestrator | Monday 09 March 2026 00:47:29 +0000 (0:00:01.183) 0:00:01.452 ********** 2026-03-09 00:47:37.849297 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:37.849302 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:47:37.849306 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:47:37.849311 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:47:37.849315 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:47:37.849319 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:47:37.849323 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:47:37.849327 | orchestrator | 2026-03-09 00:47:37.849332 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:47:37.849337 | orchestrator | 2026-03-09 00:47:37.849344 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-09 00:47:37.849350 | orchestrator | Monday 09 March 2026 00:47:31 +0000 (0:00:01.457) 0:00:02.910 ********** 2026-03-09 00:47:37.849357 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:47:37.849363 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:47:37.849370 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:47:37.849376 | orchestrator | ok: [testbed-manager] 2026-03-09 00:47:37.849383 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:47:37.849389 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:47:37.849396 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:47:37.849403 | orchestrator | 2026-03-09 00:47:37.849410 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:47:37.849417 | orchestrator | 2026-03-09 00:47:37.849423 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:47:37.849430 | orchestrator | Monday 09 March 2026 00:47:36 +0000 (0:00:05.382) 0:00:08.293 ********** 2026-03-09 00:47:37.849438 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:47:37.849497 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:47:37.849503 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:47:37.849510 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:47:37.849515 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:47:37.849519 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:47:37.849523 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:47:37.849527 | orchestrator | 2026-03-09 00:47:37.849531 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:47:37.849536 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849542 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-09 00:47:37.849546 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849550 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849554 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849558 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849563 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:47:37.849573 | orchestrator | 2026-03-09 00:47:37.849577 | orchestrator | 2026-03-09 00:47:37.849581 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:47:37.849585 | orchestrator | Monday 09 March 2026 00:47:37 +0000 (0:00:00.616) 0:00:08.910 ********** 2026-03-09 00:47:37.849590 | orchestrator | =============================================================================== 2026-03-09 00:47:37.849594 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.38s 2026-03-09 00:47:37.849598 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s 2026-03-09 00:47:37.849602 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2026-03-09 00:47:37.849606 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-03-09 00:47:51.920707 | orchestrator | 2026-03-09 00:47:51 | INFO  | Task 2c989bca-25c8-4d02-aaa1-1b0127a3b1c9 (frr) was prepared for execution. 2026-03-09 00:47:51.920795 | orchestrator | 2026-03-09 00:47:51 | INFO  | It takes a moment until task 2c989bca-25c8-4d02-aaa1-1b0127a3b1c9 (frr) has been started and output is visible here. 
2026-03-09 00:48:21.907697 | orchestrator | 2026-03-09 00:48:21.907810 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-09 00:48:21.907827 | orchestrator | 2026-03-09 00:48:21.907840 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-09 00:48:21.907869 | orchestrator | Monday 09 March 2026 00:47:57 +0000 (0:00:00.291) 0:00:00.291 ********** 2026-03-09 00:48:21.907881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:48:21.907894 | orchestrator | 2026-03-09 00:48:21.907905 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-09 00:48:21.907917 | orchestrator | Monday 09 March 2026 00:47:57 +0000 (0:00:00.253) 0:00:00.544 ********** 2026-03-09 00:48:21.907928 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:21.907940 | orchestrator | 2026-03-09 00:48:21.907978 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-09 00:48:21.907990 | orchestrator | Monday 09 March 2026 00:47:58 +0000 (0:00:01.314) 0:00:01.859 ********** 2026-03-09 00:48:21.908006 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:21.908018 | orchestrator | 2026-03-09 00:48:21.908029 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-09 00:48:21.908040 | orchestrator | Monday 09 March 2026 00:48:10 +0000 (0:00:11.519) 0:00:13.378 ********** 2026-03-09 00:48:21.908064 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:21.908077 | orchestrator | 2026-03-09 00:48:21.908088 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-09 00:48:21.908100 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:01.265) 0:00:14.643 ********** 2026-03-09 
00:48:21.908111 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:21.908122 | orchestrator | 2026-03-09 00:48:21.908133 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-09 00:48:21.908144 | orchestrator | Monday 09 March 2026 00:48:12 +0000 (0:00:01.064) 0:00:15.708 ********** 2026-03-09 00:48:21.908155 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:21.908166 | orchestrator | 2026-03-09 00:48:21.908178 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-09 00:48:21.908189 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:01.365) 0:00:17.074 ********** 2026-03-09 00:48:21.908204 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:48:21.908222 | orchestrator | 2026-03-09 00:48:21.908242 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-09 00:48:21.908259 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:00.137) 0:00:17.212 ********** 2026-03-09 00:48:21.908279 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:48:21.908327 | orchestrator | 2026-03-09 00:48:21.908346 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-09 00:48:21.908366 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:00.246) 0:00:17.458 ********** 2026-03-09 00:48:21.908385 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:21.908405 | orchestrator | 2026-03-09 00:48:21.908466 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-09 00:48:21.908480 | orchestrator | Monday 09 March 2026 00:48:15 +0000 (0:00:01.211) 0:00:18.669 ********** 2026-03-09 00:48:21.908494 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-09 00:48:21.908506 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-09 00:48:21.908521 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-09 00:48:21.908534 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-09 00:48:21.908547 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-09 00:48:21.908560 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-09 00:48:21.908571 | orchestrator | 2026-03-09 00:48:21.908582 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-09 00:48:21.908593 | orchestrator | Monday 09 March 2026 00:48:18 +0000 (0:00:02.457) 0:00:21.126 ********** 2026-03-09 00:48:21.908604 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:21.908614 | orchestrator | 2026-03-09 00:48:21.908625 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-09 00:48:21.908636 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:01.928) 0:00:23.054 ********** 2026-03-09 00:48:21.908646 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:21.908657 | orchestrator | 2026-03-09 00:48:21.908668 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:48:21.908679 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:48:21.908691 | orchestrator | 2026-03-09 00:48:21.908702 | orchestrator | 2026-03-09 00:48:21.908712 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:48:21.908723 | orchestrator | Monday 09 March 2026 00:48:21 +0000 (0:00:01.501) 0:00:24.556 ********** 2026-03-09 00:48:21.908734 | 
orchestrator | =============================================================================== 2026-03-09 00:48:21.908745 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.52s 2026-03-09 00:48:21.908755 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.46s 2026-03-09 00:48:21.908766 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.93s 2026-03-09 00:48:21.908776 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.50s 2026-03-09 00:48:21.908787 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.37s 2026-03-09 00:48:21.908816 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.31s 2026-03-09 00:48:21.908828 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.27s 2026-03-09 00:48:21.908839 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.21s 2026-03-09 00:48:21.908850 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.06s 2026-03-09 00:48:21.908861 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s 2026-03-09 00:48:21.908871 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.25s 2026-03-09 00:48:21.908882 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-09 00:48:22.267082 | orchestrator | 2026-03-09 00:48:22.269673 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 9 00:48:22 UTC 2026 2026-03-09 00:48:22.269724 | orchestrator | 2026-03-09 00:48:24.375926 | orchestrator | 2026-03-09 00:48:24 | INFO  | Collection nutshell is prepared for execution 2026-03-09 00:48:24.376042 | orchestrator | 2026-03-09 00:48:24 | INFO  | A [0] - 
dotfiles 2026-03-09 00:48:34.420970 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - homer 2026-03-09 00:48:34.421064 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - netdata 2026-03-09 00:48:34.421075 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - openstackclient 2026-03-09 00:48:34.421083 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - phpmyadmin 2026-03-09 00:48:34.421090 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - common 2026-03-09 00:48:34.422440 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- loadbalancer 2026-03-09 00:48:34.422496 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [2] --- opensearch 2026-03-09 00:48:34.422508 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [2] --- mariadb-ng 2026-03-09 00:48:34.422793 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [3] ---- horizon 2026-03-09 00:48:34.422814 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [3] ---- keystone 2026-03-09 00:48:34.422821 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- neutron 2026-03-09 00:48:34.422829 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ wait-for-nova 2026-03-09 00:48:34.423051 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [6] ------- octavia 2026-03-09 00:48:34.424574 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- barbican 2026-03-09 00:48:34.424618 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- designate 2026-03-09 00:48:34.424628 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- ironic 2026-03-09 00:48:34.424635 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- placement 2026-03-09 00:48:34.424642 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- magnum 2026-03-09 00:48:34.425379 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- openvswitch 2026-03-09 00:48:34.425465 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [2] --- ovn 2026-03-09 00:48:34.425807 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- memcached 2026-03-09 
00:48:34.425837 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- redis 2026-03-09 00:48:34.425905 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- rabbitmq-ng 2026-03-09 00:48:34.426488 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - kubernetes 2026-03-09 00:48:34.429133 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- kubeconfig 2026-03-09 00:48:34.429172 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- copy-kubeconfig 2026-03-09 00:48:34.429182 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [0] - ceph 2026-03-09 00:48:34.430924 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [1] -- ceph-pools 2026-03-09 00:48:34.430964 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [2] --- copy-ceph-keys 2026-03-09 00:48:34.430982 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [3] ---- cephclient 2026-03-09 00:48:34.431238 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-09 00:48:34.431695 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- wait-for-keystone 2026-03-09 00:48:34.431730 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-09 00:48:34.431985 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ glance 2026-03-09 00:48:34.432086 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ cinder 2026-03-09 00:48:34.432349 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ nova 2026-03-09 00:48:34.433217 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [4] ----- prometheus 2026-03-09 00:48:34.433253 | orchestrator | 2026-03-09 00:48:34 | INFO  | A [5] ------ grafana 2026-03-09 00:48:34.661348 | orchestrator | 2026-03-09 00:48:34 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-09 00:48:34.661493 | orchestrator | 2026-03-09 00:48:34 | INFO  | Tasks are running in the background 2026-03-09 00:48:38.285364 | orchestrator | 2026-03-09 00:48:38 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-09 00:48:40.425841 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:48:40.426484 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:48:40.428852 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:48:40.433140 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 77a19988-2298-48ea-b0c7-f0089533f0cb is in state STARTED 2026-03-09 00:48:40.434086 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:48:40.435094 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:48:40.435876 | orchestrator | 2026-03-09 00:48:40 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:48:40.436138 | orchestrator | 2026-03-09 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:43.493736 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:48:43.504715 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:48:43.515073 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:48:43.527231 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 77a19988-2298-48ea-b0c7-f0089533f0cb is in state STARTED 2026-03-09 00:48:43.551018 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:48:43.560671 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:48:43.561177 | orchestrator | 2026-03-09 00:48:43 | INFO  | Task 
283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:48:43.561222 | orchestrator | 2026-03-09 00:48:43 | INFO  | Wait 1 second(s) until the next check
9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:09.130539 | orchestrator | 2026-03-09 00:49:09 | INFO  | Task 77a19988-2298-48ea-b0c7-f0089533f0cb is in state STARTED 2026-03-09 00:49:09.131444 | orchestrator | 2026-03-09 00:49:09 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:09.132602 | orchestrator | 2026-03-09 00:49:09 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:09.133593 | orchestrator | 2026-03-09 00:49:09 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:09.133653 | orchestrator | 2026-03-09 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:12.599917 | orchestrator | 2026-03-09 00:49:12.600005 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-09 00:49:12.600017 | orchestrator | 2026-03-09 00:49:12.600026 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-09 00:49:12.600034 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:01.339) 0:00:01.339 ********** 2026-03-09 00:49:12.600043 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:49:12.600052 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:49:12.600060 | orchestrator | changed: [testbed-manager] 2026-03-09 00:49:12.600068 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:49:12.600076 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:49:12.600083 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:49:12.600091 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:49:12.600099 | orchestrator | 2026-03-09 00:49:12.600107 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-03-09 00:49:12.600115 | orchestrator | Monday 09 March 2026 00:48:58 +0000 (0:00:04.889) 0:00:06.228 ********** 2026-03-09 00:49:12.600124 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-09 00:49:12.600132 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-09 00:49:12.600140 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-09 00:49:12.600148 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-09 00:49:12.600156 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-09 00:49:12.600164 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-09 00:49:12.600171 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-09 00:49:12.600179 | orchestrator | 2026-03-09 00:49:12.600187 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-09 00:49:12.600196 | orchestrator | Monday 09 March 2026 00:49:01 +0000 (0:00:02.538) 0:00:08.767 ********** 2026-03-09 00:49:12.600207 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.821910', 'end': '2026-03-09 00:49:00.828585', 'delta': '0:00:00.006675', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600550 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.773881', 'end': '2026-03-09 00:49:00.782326', 'delta': '0:00:00.008445', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600568 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.786951', 'end': '2026-03-09 00:49:00.790913', 'delta': '0:00:00.003962', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600628 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.812448', 'end': '2026-03-09 00:49:00.818094', 'delta': '0:00:00.005646', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600645 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.783488', 'end': '2026-03-09 00:49:00.788322', 'delta': '0:00:00.004834', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600656 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:01.074543', 'end': '2026-03-09 00:49:01.081132', 'delta': '0:00:00.006589', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600665 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-09 00:49:00.780137', 'end': '2026-03-09 00:49:00.784580', 'delta': '0:00:00.004443', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-09 00:49:12.600674 | orchestrator | 2026-03-09 00:49:12.600682 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-03-09 00:49:12.600698 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:03.205) 0:00:11.972 ********** 2026-03-09 00:49:12.600707 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-09 00:49:12.600715 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-09 00:49:12.600723 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-09 00:49:12.600731 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-09 00:49:12.600739 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-09 00:49:12.600747 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-09 00:49:12.600755 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-09 00:49:12.600763 | orchestrator | 2026-03-09 00:49:12.600771 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-03-09 00:49:12.600779 | orchestrator | Monday 09 March 2026 00:49:06 +0000 (0:00:02.144) 0:00:14.116 ********** 2026-03-09 00:49:12.600787 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-09 00:49:12.600795 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-09 00:49:12.600803 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-09 00:49:12.600811 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-09 00:49:12.600819 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-09 00:49:12.600827 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-09 00:49:12.600835 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-09 00:49:12.600843 | orchestrator | 2026-03-09 00:49:12.600852 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:49:12.600866 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600876 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600884 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600892 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600900 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600912 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600920 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:49:12.600928 | orchestrator | 2026-03-09 00:49:12.600936 | orchestrator | 2026-03-09 00:49:12.600944 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:49:12.600952 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:03.445) 0:00:17.561 ********** 2026-03-09 00:49:12.600960 | orchestrator | =============================================================================== 2026-03-09 00:49:12.600969 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.89s 2026-03-09 00:49:12.600977 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.45s 2026-03-09 00:49:12.600985 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.21s 2026-03-09 00:49:12.600993 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.54s 2026-03-09 00:49:12.601001 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 2.14s 2026-03-09 00:49:12.601009 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:12.601022 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:12.601031 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:12.601039 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task 77a19988-2298-48ea-b0c7-f0089533f0cb is in state SUCCESS 2026-03-09 00:49:12.601047 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:12.601055 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:12.601063 | orchestrator | 2026-03-09 00:49:12 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:12.601071 | orchestrator | 2026-03-09 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:16.169309 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:16.169415 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:16.169423 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:16.169427 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:16.169431 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:16.169436 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:16.169440 | orchestrator | 2026-03-09 00:49:15 | INFO  | Task 
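The geerlingguy.dotfiles play above performs three steps per host: clone the dotfiles repository, remove any plain-file copy of a dotfile that is about to be replaced (the `ls -F ~/.tmux.conf` checks), and symlink each entry into the home directory. A minimal shell sketch of the link step — the function name, argument layout, and file list are illustrative assumptions, not taken from the role:

```shell
link_dotfiles() {
    # $1 = directory containing the cloned dotfiles repo
    # $2 = target home directory
    src="$1"; home="$2"
    for f in .tmux.conf; do
        # Mirror the "Remove existing dotfiles file if a replacement is
        # being linked" task: delete a regular (non-symlink) file first.
        if [ -e "$home/$f" ] && [ ! -L "$home/$f" ]; then
            rm "$home/$f"
        fi
        # -s symlink, -f replace existing, -n don't follow an existing link
        ln -sfn "$src/$f" "$home/$f"
    done
}
```

Running this twice is idempotent, which is why the second Ansible run of the link task would report `ok` rather than `changed`.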
283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:16.169444 | orchestrator | 2026-03-09 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:18.829327 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:18.829520 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:18.829537 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:18.829550 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:18.829561 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:18.829572 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:18.829583 | orchestrator | 2026-03-09 00:49:18 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:18.829594 | orchestrator | 2026-03-09 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:21.940479 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:21.940606 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:21.940625 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:21.940657 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:21.941307 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:21.941350 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task 
49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:21.941357 | orchestrator | 2026-03-09 00:49:21 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:21.941388 | orchestrator | 2026-03-09 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:24.991928 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:24.993542 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:24.995110 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:24.996031 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:24.997224 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:25.000321 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:25.000423 | orchestrator | 2026-03-09 00:49:24 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:25.000433 | orchestrator | 2026-03-09 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:28.109037 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:28.139771 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:28.164988 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:28.165065 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:28.165075 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task 
4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:28.165082 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:28.165089 | orchestrator | 2026-03-09 00:49:28 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:28.165096 | orchestrator | 2026-03-09 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:31.239694 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:31.247594 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:31.256218 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:31.262653 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:31.281297 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:31.283660 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:31.301212 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:31.301284 | orchestrator | 2026-03-09 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:34.818864 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:34.820315 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:34.820437 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:34.820454 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 
5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:34.820927 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:34.823827 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:34.827195 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:34.827239 | orchestrator | 2026-03-09 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:37.990544 | orchestrator | 2026-03-09 00:49:37 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:37.990754 | orchestrator | 2026-03-09 00:49:37 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:37.990822 | orchestrator | 2026-03-09 00:49:37 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:37.993413 | orchestrator | 2026-03-09 00:49:37 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:38.000576 | orchestrator | 2026-03-09 00:49:37 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:38.012436 | orchestrator | 2026-03-09 00:49:38 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:38.038486 | orchestrator | 2026-03-09 00:49:38 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:38.038941 | orchestrator | 2026-03-09 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:41.395345 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:41.395552 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:41.395573 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 
9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:41.395592 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:41.395605 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:41.395615 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:41.395625 | orchestrator | 2026-03-09 00:49:41 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:41.395636 | orchestrator | 2026-03-09 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:44.258689 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:44.258820 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED 2026-03-09 00:49:44.263083 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:49:44.264039 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED 2026-03-09 00:49:44.265506 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:49:44.270317 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED 2026-03-09 00:49:44.275160 | orchestrator | 2026-03-09 00:49:44 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:49:44.275213 | orchestrator | 2026-03-09 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:47.582211 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state STARTED 2026-03-09 00:49:47.582296 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 
b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED
2026-03-09 00:49:47.582306 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:49:47.591267 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state STARTED
2026-03-09 00:49:47.608854 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED
2026-03-09 00:49:47.613426 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state STARTED
2026-03-09 00:49:47.614382 | orchestrator | 2026-03-09 00:49:47 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:49:47.618626 | orchestrator | 2026-03-09 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:49:50.705592 | orchestrator | 2026-03-09 00:49:50 | INFO  | Task e5eb73cf-7c06-4f35-94f9-bdf707cdb590 is in state SUCCESS
2026-03-09 00:50:03.289164 | orchestrator | 2026-03-09 00:50:03 | INFO  | Task 5632bbfa-ff04-4325-8b7f-65403d5720df is in state SUCCESS
2026-03-09 00:50:37.665563 | orchestrator | 2026-03-09 00:50:37 | INFO  | Task 49c432ad-7a94-4f7f-a3c4-139f9122a5e0 is in state SUCCESS
2026-03-09 00:50:37.665607 | orchestrator |
2026-03-09 00:50:37.665615 | orchestrator |
2026-03-09 00:50:37.665622 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-09 00:50:37.665629 | orchestrator |
2026-03-09 00:50:37.665636 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-09 00:50:37.665662 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:01.241) 0:00:01.241 **********
2026-03-09 00:50:37.665669 | orchestrator | ok: [testbed-manager] => {
2026-03-09 00:50:37.665677 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-09 00:50:37.665685 | orchestrator | }
2026-03-09 00:50:37.665692 | orchestrator |
2026-03-09 00:50:37.665698 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-09 00:50:37.665704 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.283) 0:00:01.525 **********
2026-03-09 00:50:37.665710 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.665717 | orchestrator |
2026-03-09 00:50:37.665722 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-09 00:50:37.665728 | orchestrator | Monday 09 March 2026 00:48:57 +0000 (0:00:02.232) 0:00:03.758 **********
2026-03-09 00:50:37.665745 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-09 00:50:37.665751 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-09 00:50:37.665757 | orchestrator |
2026-03-09 00:50:37.665763 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-09 00:50:37.665769 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:02.828) 0:00:06.586 **********
2026-03-09 00:50:37.665775 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.665781 | orchestrator |
2026-03-09 00:50:37.665787 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-09 00:50:37.665793 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:04.784) 0:00:11.371 **********
2026-03-09 00:50:37.665799 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.665804 | orchestrator |
2026-03-09 00:50:37.665810 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-09 00:50:37.665816 | orchestrator | Monday 09 March 2026 00:49:07 +0000 (0:00:03.137) 0:00:14.508 **********
2026-03-09 00:50:37.665822 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-09 00:50:37.665828 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.665833 | orchestrator |
2026-03-09 00:50:37.665839 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-09 00:50:37.665845 | orchestrator | Monday 09 March 2026 00:49:39 +0000 (0:00:31.891) 0:00:46.400 **********
2026-03-09 00:50:37.665851 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.665857 | orchestrator |
2026-03-09 00:50:37.665862 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:50:37.665869 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:50:37.665876 | orchestrator |
2026-03-09 00:50:37.665882 | orchestrator |
2026-03-09 00:50:37.665887 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:50:37.665893 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:07.410) 0:00:53.810 **********
2026-03-09 00:50:37.665899 | orchestrator | ===============================================================================
2026-03-09 00:50:37.665905 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 31.89s
2026-03-09 00:50:37.665910 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 7.41s
2026-03-09 00:50:37.665916 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.78s
2026-03-09 00:50:37.665922 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.14s
2026-03-09 00:50:37.665928 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.83s
2026-03-09 00:50:37.665933 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.23s
2026-03-09 00:50:37.665939 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.28s
2026-03-09 00:50:37.665945 | orchestrator |
2026-03-09 00:50:37.665951 | orchestrator |
2026-03-09 00:50:37.665962 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-09 00:50:37.665968 | orchestrator |
2026-03-09 00:50:37.665974 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-09 00:50:37.665980 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:00.517) 0:00:00.517 **********
2026-03-09 00:50:37.665986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-09 00:50:37.666002 | orchestrator |
2026-03-09 00:50:37.666008 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-09 00:50:37.666052 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.733) 0:00:01.256 **********
2026-03-09 00:50:37.666060 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-09 00:50:37.666066 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-09 00:50:37.666072 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-09 00:50:37.666078 | orchestrator |
2026-03-09 00:50:37.666084 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-09 00:50:37.666089 | orchestrator | Monday 09 March 2026 00:48:58 +0000 (0:00:03.499) 0:00:04.755 **********
2026-03-09 00:50:37.666095 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666101 | orchestrator |
2026-03-09 00:50:37.666107 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-09 00:50:37.666125 | orchestrator | Monday 09 March 2026 00:49:03 +0000 (0:00:04.486) 0:00:09.242 **********
2026-03-09 00:50:37.666131 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-09 00:50:37.666137 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.666143 | orchestrator |
2026-03-09 00:50:37.666149 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-09 00:50:37.666155 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:42.938) 0:00:52.181 **********
2026-03-09 00:50:37.666161 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666167 | orchestrator |
2026-03-09 00:50:37.666174 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-09 00:50:37.666181 | orchestrator | Monday 09 March 2026 00:49:48 +0000 (0:00:02.528) 0:00:54.709 **********
2026-03-09 00:50:37.666187 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.666194 | orchestrator |
2026-03-09 00:50:37.666201 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-09 00:50:37.666208 | orchestrator | Monday 09 March 2026 00:49:50 +0000 (0:00:02.109) 0:00:56.819 **********
2026-03-09 00:50:37.666214 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666221 | orchestrator |
2026-03-09 00:50:37.666235 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-09 00:50:37.666242 | orchestrator | Monday 09 March 2026 00:49:55 +0000 (0:00:04.718) 0:01:01.537 **********
2026-03-09 00:50:37.666253 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666260 | orchestrator |
2026-03-09 00:50:37.666267 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-09 00:50:37.666273 | orchestrator | Monday 09 March 2026 00:49:59 +0000 (0:00:04.192) 0:01:05.730 **********
2026-03-09 00:50:37.666280 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666287 | orchestrator |
2026-03-09 00:50:37.666316 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-09 00:50:37.666323 | orchestrator | Monday 09 March 2026 00:50:01 +0000 (0:00:01.684) 0:01:07.415 **********
2026-03-09 00:50:37.666330 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.666336 | orchestrator |
2026-03-09 00:50:37.666343 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:50:37.666350 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:50:37.666363 | orchestrator |
2026-03-09 00:50:37.666370 | orchestrator |
2026-03-09 00:50:37.666377 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:50:37.666383 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:00.983) 0:01:08.398 **********
2026-03-09 00:50:37.666389 | orchestrator | ===============================================================================
2026-03-09 00:50:37.666395 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 42.93s
2026-03-09 00:50:37.666401 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.72s
2026-03-09 00:50:37.666406 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.49s
2026-03-09 00:50:37.666412 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 4.19s
2026-03-09 00:50:37.666418 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.50s
2026-03-09 00:50:37.666424 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.53s
2026-03-09 00:50:37.666429 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.11s
2026-03-09 00:50:37.666435 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.68s
2026-03-09 00:50:37.666441 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.98s
2026-03-09 00:50:37.666446 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.74s
2026-03-09 00:50:37.666452 | orchestrator |
2026-03-09 00:50:37.666458 | orchestrator |
2026-03-09 00:50:37.666464 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-09 00:50:37.666469 | orchestrator |
2026-03-09 00:50:37.666475 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-09 00:50:37.666481 | orchestrator | Monday 09 March 2026 00:49:17 +0000 (0:00:00.301) 0:00:00.302 **********
2026-03-09 00:50:37.666486 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.666492 | orchestrator |
2026-03-09 00:50:37.666498 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-09 00:50:37.666504 | orchestrator | Monday 09 March 2026 00:49:20 +0000 (0:00:03.208) 0:00:03.510 **********
2026-03-09 00:50:37.666509 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-09 00:50:37.666515 | orchestrator |
2026-03-09 00:50:37.666521 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-09 00:50:37.666527 | orchestrator | Monday 09 March 2026 00:49:22 +0000 (0:00:02.251) 0:00:05.762 **********
2026-03-09 00:50:37.666532 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666538 | orchestrator |
2026-03-09 00:50:37.666544 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-09 00:50:37.666550 | orchestrator | Monday 09 March 2026 00:49:24 +0000 (0:00:01.944) 0:00:07.706 **********
2026-03-09 00:50:37.666556 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-09 00:50:37.666562 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:37.666567 | orchestrator |
2026-03-09 00:50:37.666573 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-09 00:50:37.666579 | orchestrator | Monday 09 March 2026 00:50:26 +0000 (0:01:02.064) 0:01:09.771 **********
2026-03-09 00:50:37.666585 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:37.666590 | orchestrator |
2026-03-09 00:50:37.666596 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:50:37.666606 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:50:37.666612 | orchestrator |
2026-03-09 00:50:37.666618 | orchestrator |
2026-03-09 00:50:37.666624 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:50:37.666630 | orchestrator | Monday 09 March 2026 00:50:32 +0000 (0:00:05.146) 0:01:14.918 **********
2026-03-09 00:50:37.666635 | orchestrator | ===============================================================================
2026-03-09 00:50:37.666646 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.06s
2026-03-09 00:50:37.666652 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.15s
2026-03-09 00:50:37.666657 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 3.21s
2026-03-09 00:50:37.666663 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 2.25s
2026-03-09 00:50:37.666669 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.94s
2026-03-09 00:50:37.669430 | orchestrator | 2026-03-09 00:50:37 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:50:37.669494 | orchestrator | 2026-03-09 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:40.743119 | orchestrator | 2026-03-09 00:50:40 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state STARTED
2026-03-09 00:50:40.745013 | orchestrator | 2026-03-09 00:50:40 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:50:40.749556 | orchestrator | 2026-03-09 00:50:40 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED
2026-03-09 00:50:40.753660 | orchestrator | 2026-03-09 00:50:40 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:50:40.754833 | orchestrator | 2026-03-09 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:02.221754 | orchestrator |
2026-03-09 00:51:02.221850 | orchestrator |
2026-03-09 00:51:02.221860 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:51:02.221867 | orchestrator |
2026-03-09 00:51:02.221885 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:51:02.221892 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:00.553) 0:00:00.553 **********
2026-03-09 00:51:02.221898 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-09 00:51:02.221904 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-09 00:51:02.221909 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-09 00:51:02.221915 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-09 00:51:02.221921 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-09 00:51:02.221926 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-09 00:51:02.221932 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-09 00:51:02.221937 | orchestrator |
2026-03-09 00:51:02.221943 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-09 00:51:02.221948 | orchestrator |
2026-03-09 00:51:02.221954 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-09 00:51:02.221960 | orchestrator | Monday 09 March 2026 00:48:56 +0000 (0:00:01.080) 0:00:01.633 **********
2026-03-09 00:51:02.221975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-03-09 00:51:02.221982 | orchestrator |
2026-03-09 00:51:02.221988 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-09 00:51:02.221994 | orchestrator | Monday 09 March 2026 00:48:58 +0000 (0:00:01.693) 0:00:03.327 **********
2026-03-09 00:51:02.222000 | orchestrator | ok: [testbed-manager]
2026-03-09 00:51:02.222006 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:51:02.222012 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:51:02.222063 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:51:02.222069 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:51:02.222091 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:51:02.222097 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:51:02.222103 | orchestrator |
2026-03-09 00:51:02.222108 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-09 00:51:02.222117 | orchestrator | Monday 09 March 2026 00:49:02 +0000 (0:00:04.336) 0:00:07.663 **********
2026-03-09 00:51:02.222126 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:51:02.222134 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:51:02.222142 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:51:02.222151 | orchestrator | ok: [testbed-manager]
2026-03-09 00:51:02.222160 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:51:02.222168 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:51:02.222178 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:51:02.222187 | orchestrator |
2026-03-09 00:51:02.222195 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-09 00:51:02.222205 | orchestrator | Monday 09 March 2026 00:49:06 +0000 (0:00:03.988) 0:00:11.651 **********
2026-03-09 00:51:02.222216 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:02.222225 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:02.222233 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:02.222241 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:02.222249 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:02.222257 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:02.222264 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:02.222295 | orchestrator |
2026-03-09 00:51:02.222306 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-09 00:51:02.222365 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:03.759) 0:00:15.411 **********
2026-03-09 00:51:02.222382 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:02.222391 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:02.222400 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:02.222410 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:02.222419 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:02.222429 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:02.222438 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:02.222447 | orchestrator |
2026-03-09 00:51:02.222457 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-09 00:51:02.222466 | orchestrator | Monday 09 March 2026 00:49:32 +0000 (0:00:21.920) 0:00:37.331 **********
2026-03-09 00:51:02.222475 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:02.222484 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:02.222494 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:02.222502 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:02.222512 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:02.222521 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:02.222529 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:02.222538 | orchestrator |
2026-03-09 00:51:02.222548 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-09 00:51:02.222559 | orchestrator | Monday 09 March 2026 00:50:26 +0000 (0:00:53.802) 0:01:31.134 **********
2026-03-09 00:51:02.222569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:51:02.222581 | orchestrator |
2026-03-09 00:51:02.222592 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-09 00:51:02.222602 | orchestrator | Monday 09 March 2026 00:50:28 +0000 (0:00:01.949) 0:01:33.083 **********
2026-03-09 00:51:02.222611 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-09 00:51:02.222621 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-09 00:51:02.222631 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-09 00:51:02.222640 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-09 00:51:02.222673 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-09 00:51:02.222695 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-09 00:51:02.222704 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-09 00:51:02.222714 | orchestrator |
changed: [testbed-node-2] => (item=stream.conf) 2026-03-09 00:51:02.222723 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-09 00:51:02.222733 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-09 00:51:02.222743 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-09 00:51:02.222753 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-09 00:51:02.222763 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-09 00:51:02.222773 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-09 00:51:02.222782 | orchestrator | 2026-03-09 00:51:02.222788 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-09 00:51:02.222795 | orchestrator | Monday 09 March 2026 00:50:35 +0000 (0:00:07.603) 0:01:40.687 ********** 2026-03-09 00:51:02.222801 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:02.222807 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:02.222813 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:02.222818 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:02.222824 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:51:02.222830 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:51:02.222836 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:51:02.222841 | orchestrator | 2026-03-09 00:51:02.222857 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-09 00:51:02.222863 | orchestrator | Monday 09 March 2026 00:50:37 +0000 (0:00:01.667) 0:01:42.355 ********** 2026-03-09 00:51:02.222869 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:02.222875 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:02.222880 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:02.222886 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:02.222892 | orchestrator | changed: [testbed-node-3] 
2026-03-09 00:51:02.222898 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:02.222903 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:02.222909 | orchestrator | 2026-03-09 00:51:02.222915 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-09 00:51:02.222920 | orchestrator | Monday 09 March 2026 00:50:39 +0000 (0:00:02.180) 0:01:44.536 ********** 2026-03-09 00:51:02.222926 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:02.222932 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:02.222938 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:02.222943 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:51:02.222949 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:51:02.222955 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:02.222961 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:51:02.222966 | orchestrator | 2026-03-09 00:51:02.222972 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-09 00:51:02.222978 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:02.093) 0:01:46.630 ********** 2026-03-09 00:51:02.222984 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:51:02.222989 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:51:02.222995 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:02.223001 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:02.223006 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:02.223012 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:02.223018 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:51:02.223023 | orchestrator | 2026-03-09 00:51:02.223029 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-09 00:51:02.223035 | orchestrator | Monday 09 March 2026 00:50:44 +0000 (0:00:02.684) 0:01:49.314 ********** 2026-03-09 00:51:02.223041 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-09 00:51:02.223049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:51:02.223060 | orchestrator | 2026-03-09 00:51:02.223066 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-09 00:51:02.223072 | orchestrator | Monday 09 March 2026 00:50:46 +0000 (0:00:01.849) 0:01:51.163 ********** 2026-03-09 00:51:02.223078 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:02.223084 | orchestrator | 2026-03-09 00:51:02.223089 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-09 00:51:02.223095 | orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:02.548) 0:01:53.711 ********** 2026-03-09 00:51:02.223101 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:02.223107 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:02.223113 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:02.223118 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:02.223124 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:02.223130 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:02.223135 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:02.223141 | orchestrator | 2026-03-09 00:51:02.223147 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:51:02.223153 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223160 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223166 | orchestrator | testbed-node-1 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223172 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223185 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223199 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223209 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:02.223219 | orchestrator | 2026-03-09 00:51:02.223228 | orchestrator | 2026-03-09 00:51:02.223237 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:02.223246 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:11.653) 0:02:05.365 ********** 2026-03-09 00:51:02.223255 | orchestrator | =============================================================================== 2026-03-09 00:51:02.223265 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 53.80s 2026-03-09 00:51:02.223299 | orchestrator | osism.services.netdata : Add repository -------------------------------- 21.92s 2026-03-09 00:51:02.223309 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.65s 2026-03-09 00:51:02.223320 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.60s 2026-03-09 00:51:02.223329 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.34s 2026-03-09 00:51:02.223338 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.99s 2026-03-09 00:51:02.223348 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.76s 2026-03-09 00:51:02.223358 | orchestrator | osism.services.netdata : Manage 
service netdata ------------------------- 2.68s 2026-03-09 00:51:02.223367 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.55s 2026-03-09 00:51:02.223377 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.18s 2026-03-09 00:51:02.223395 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.09s 2026-03-09 00:51:02.223405 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.95s 2026-03-09 00:51:02.223415 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.85s 2026-03-09 00:51:02.223425 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.69s 2026-03-09 00:51:02.223434 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.67s 2026-03-09 00:51:02.223443 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s 2026-03-09 00:51:02.223454 | orchestrator | 2026-03-09 00:51:02 | INFO  | Task b93fd672-da59-41f6-9f80-30f4a2433a29 is in state SUCCESS 2026-03-09 00:51:02.223565 | orchestrator | 2026-03-09 00:51:02 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:02.224653 | orchestrator | 2026-03-09 00:51:02 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:02.226378 | orchestrator | 2026-03-09 00:51:02 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:02.226593 | orchestrator | 2026-03-09 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:05.277409 | orchestrator | 2026-03-09 00:51:05 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:05.277836 | orchestrator | 2026-03-09 00:51:05 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 
00:51:05.278108 | orchestrator | 2026-03-09 00:51:05 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:05.278124 | orchestrator | 2026-03-09 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:08.346258 | orchestrator | 2026-03-09 00:51:08 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:08.349651 | orchestrator | 2026-03-09 00:51:08 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:08.351568 | orchestrator | 2026-03-09 00:51:08 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:08.352228 | orchestrator | 2026-03-09 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:11.418387 | orchestrator | 2026-03-09 00:51:11 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:11.420340 | orchestrator | 2026-03-09 00:51:11 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:11.422690 | orchestrator | 2026-03-09 00:51:11 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:11.423323 | orchestrator | 2026-03-09 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:14.483850 | orchestrator | 2026-03-09 00:51:14 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:14.486078 | orchestrator | 2026-03-09 00:51:14 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:14.487976 | orchestrator | 2026-03-09 00:51:14 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:14.488742 | orchestrator | 2026-03-09 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:17.551292 | orchestrator | 2026-03-09 00:51:17 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:17.554965 | orchestrator | 2026-03-09 00:51:17 | 
INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:17.563025 | orchestrator | 2026-03-09 00:51:17 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:17.563114 | orchestrator | 2026-03-09 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:20.603566 | orchestrator | 2026-03-09 00:51:20 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:20.604997 | orchestrator | 2026-03-09 00:51:20 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:20.605035 | orchestrator | 2026-03-09 00:51:20 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:20.605049 | orchestrator | 2026-03-09 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:23.645096 | orchestrator | 2026-03-09 00:51:23 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:23.646165 | orchestrator | 2026-03-09 00:51:23 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:23.651163 | orchestrator | 2026-03-09 00:51:23 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:23.651315 | orchestrator | 2026-03-09 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:26.698244 | orchestrator | 2026-03-09 00:51:26 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:26.698367 | orchestrator | 2026-03-09 00:51:26 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:26.699407 | orchestrator | 2026-03-09 00:51:26 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:26.699444 | orchestrator | 2026-03-09 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:29.766082 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in 
state STARTED 2026-03-09 00:51:29.767557 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:29.769927 | orchestrator | 2026-03-09 00:51:29 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:29.769967 | orchestrator | 2026-03-09 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:32.821869 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:32.823837 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state STARTED 2026-03-09 00:51:32.825488 | orchestrator | 2026-03-09 00:51:32 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:51:32.825741 | orchestrator | 2026-03-09 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:35.873063 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:51:35.873678 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED 2026-03-09 00:51:35.874752 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:51:35.878740 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 4e604af3-f897-4a16-8bb4-231c477834f5 is in state SUCCESS 2026-03-09 00:51:35.883699 | orchestrator | 2026-03-09 00:51:35.883781 | orchestrator | 2026-03-09 00:51:35.883798 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-09 00:51:35.883815 | orchestrator | 2026-03-09 00:51:35.883830 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-09 00:51:35.883845 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.326) 0:00:00.326 ********** 2026-03-09 00:51:35.883860 | 
orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:51:35.883905 | orchestrator |
2026-03-09 00:51:35.883919 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-09 00:51:35.883932 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:01.706) 0:00:02.033 **********
2026-03-09 00:51:35.883947 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.883962 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.883978 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884003 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884018 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884033 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884043 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884052 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884062 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884071 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884080 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884088 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884097 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884106 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-09 00:51:35.884114 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884123 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884132 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-09 00:51:35.884140 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884150 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884158 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884168 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-09 00:51:35.884182 | orchestrator |
2026-03-09 00:51:35.884195 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-09 00:51:35.884217 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:06.348) 0:00:08.381 **********
2026-03-09 00:51:35.884235 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:51:35.884274 | orchestrator |
2026-03-09 00:51:35.884289 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-09 00:51:35.884303 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:01.817) 0:00:10.199 **********
2026-03-09 00:51:35.884323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884398 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884481 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884656 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884706 | orchestrator |
2026-03-09 00:51:35.884716 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-09 00:51:35.884725 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:05.100) 0:00:15.299 **********
2026-03-09 00:51:35.884734 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.884744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.884773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.884783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884802 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:51:35.884819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.884828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884852 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:35.884863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.884872 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884902 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:35.884911 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:35.884929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.884938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884956 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:35.884965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.884979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.884997 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:35.885017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-09 00:51:35.885041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885050 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:35.885059 | orchestrator | 2026-03-09 00:51:35.885068 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-09 00:51:35.885076 | orchestrator | Monday 09 March 2026 00:48:58 +0000 (0:00:03.370) 0:00:18.670 ********** 2026-03-09 00:51:35.885085 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885100 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885109 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885118 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:51:35.885127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885197 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:35.885206 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:35.885214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885466 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:35.885475 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:35.885483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885508 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:35.885516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:51:35.885531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.885548 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:35.885556 | orchestrator | 2026-03-09 00:51:35.885564 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-09 00:51:35.885572 | orchestrator | Monday 09 March 2026 00:49:03 +0000 (0:00:05.008) 0:00:23.679 ********** 2026-03-09 00:51:35.885580 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:51:35.885592 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:35.885600 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:35.885613 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:35.885621 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:35.885629 | orchestrator | 
skipping: [testbed-node-4] 2026-03-09 00:51:35.885637 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:35.885644 | orchestrator | 2026-03-09 00:51:35.885652 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-09 00:51:35.885660 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:01.656) 0:00:25.336 ********** 2026-03-09 00:51:35.885668 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:51:35.885676 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:35.885684 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:35.885692 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:35.885699 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:51:35.885707 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:51:35.885715 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:51:35.885723 | orchestrator | 2026-03-09 00:51:35.885730 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-09 00:51:35.885738 | orchestrator | Monday 09 March 2026 00:49:08 +0000 (0:00:02.903) 0:00:28.239 ********** 2026-03-09 00:51:35.885747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885755 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-09 00:51:35.885798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885828 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.885837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885895 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885947 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.885954 | orchestrator | 2026-03-09 00:51:35.885961 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-09 00:51:35.885968 | orchestrator | Monday 09 March 2026 00:49:20 +0000 (0:00:12.328) 0:00:40.567 ********** 2026-03-09 00:51:35.885975 | orchestrator | [WARNING]: Skipped 2026-03-09 00:51:35.885983 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-09 00:51:35.885990 | orchestrator | to this access issue: 2026-03-09 00:51:35.885997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-09 00:51:35.886004 | orchestrator | directory 2026-03-09 00:51:35.886011 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:51:35.886067 | orchestrator | 2026-03-09 00:51:35.886075 | orchestrator | TASK 
[common : Find custom fluentd filter config files] ************************ 2026-03-09 00:51:35.886089 | orchestrator | Monday 09 March 2026 00:49:22 +0000 (0:00:02.178) 0:00:42.746 ********** 2026-03-09 00:51:35.886096 | orchestrator | [WARNING]: Skipped 2026-03-09 00:51:35.886103 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-09 00:51:35.886110 | orchestrator | to this access issue: 2026-03-09 00:51:35.886117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-09 00:51:35.886124 | orchestrator | directory 2026-03-09 00:51:35.886131 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:51:35.886138 | orchestrator | 2026-03-09 00:51:35.886145 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-09 00:51:35.886152 | orchestrator | Monday 09 March 2026 00:49:24 +0000 (0:00:01.643) 0:00:44.390 ********** 2026-03-09 00:51:35.886159 | orchestrator | [WARNING]: Skipped 2026-03-09 00:51:35.886166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-09 00:51:35.886172 | orchestrator | to this access issue: 2026-03-09 00:51:35.886179 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-09 00:51:35.886186 | orchestrator | directory 2026-03-09 00:51:35.886193 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:51:35.886199 | orchestrator | 2026-03-09 00:51:35.886206 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-09 00:51:35.886213 | orchestrator | Monday 09 March 2026 00:49:25 +0000 (0:00:01.500) 0:00:45.891 ********** 2026-03-09 00:51:35.886220 | orchestrator | [WARNING]: Skipped 2026-03-09 00:51:35.886226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-09 00:51:35.886236 | 
orchestrator | to this access issue: 2026-03-09 00:51:35.886292 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-09 00:51:35.886307 | orchestrator | directory 2026-03-09 00:51:35.886318 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:51:35.886330 | orchestrator | 2026-03-09 00:51:35.886339 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-09 00:51:35.886346 | orchestrator | Monday 09 March 2026 00:49:27 +0000 (0:00:01.495) 0:00:47.387 ********** 2026-03-09 00:51:35.886353 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:35.886360 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:35.886367 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:35.886380 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:35.886387 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:35.886393 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:35.886400 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:35.886407 | orchestrator | 2026-03-09 00:51:35.886414 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-09 00:51:35.886421 | orchestrator | Monday 09 March 2026 00:49:37 +0000 (0:00:10.017) 0:00:57.404 ********** 2026-03-09 00:51:35.886427 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886449 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886456 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886462 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886469 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:51:35.886477 | orchestrator | 2026-03-09 00:51:35.886488 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-09 00:51:35.886498 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:08.233) 0:01:05.638 ********** 2026-03-09 00:51:35.886508 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:35.886519 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:35.886529 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:35.886540 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:51:35.886556 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:35.886566 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:51:35.886576 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:51:35.886587 | orchestrator | 2026-03-09 00:51:35.886599 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-09 00:51:35.886611 | orchestrator | Monday 09 March 2026 00:49:51 +0000 (0:00:05.928) 0:01:11.567 ********** 2026-03-09 00:51:35.886622 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 
00:51:35.886650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886680 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886733 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886753 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886760 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886775 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886794 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886811 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.886823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:51:35.886830 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886837 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:51:35.886844 | orchestrator | 2026-03-09 00:51:35.886851 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-09 00:51:35.886858 | orchestrator | Monday 09 March 2026 00:49:56 +0000 (0:00:05.277) 0:01:16.844 ********** 2026-03-09 00:51:35.886865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886899 | 
orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886905 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886921 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886927 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:35.886934 | orchestrator | 2026-03-09 00:51:35.886945 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-09 00:51:35.886953 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:05.508) 0:01:22.353 ********** 2026-03-09 00:51:35.886960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.886966 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.886973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.886981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.886987 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.886994 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.887001 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:51:35.887013 | orchestrator | 2026-03-09 00:51:35.887024 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-09 00:51:35.887031 | orchestrator | Monday 09 March 2026 00:50:06 +0000 (0:00:03.802) 0:01:26.155 ********** 2026-03-09 00:51:35.887038 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.887045 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.887053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:51:35.887060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.887068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.887080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.887087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:51:35.887103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887118 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887205 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:51:35.887226 | orchestrator |
2026-03-09 00:51:35.887232 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-09 00:51:35.887239 | orchestrator | Monday 09 March 2026 00:50:09 +0000 (0:00:03.574) 0:01:29.730 **********
2026-03-09 00:51:35.887268 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:35.887276 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:35.887283 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:35.887290 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:35.887296 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:35.887303 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:35.887310 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:35.887316 | orchestrator |
2026-03-09 00:51:35.887323 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-09 00:51:35.887330 | orchestrator | Monday 09 March 2026 00:50:11 +0000 (0:00:01.839) 0:01:31.569 **********
2026-03-09 00:51:35.887337 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:35.887344 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:35.887350 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:35.887357 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:35.887363 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:35.887370 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:35.887411 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:35.887419 | orchestrator |
2026-03-09 00:51:35.887427 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887440 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:01.229) 0:01:32.799 **********
2026-03-09 00:51:35.887447 | orchestrator |
2026-03-09 00:51:35.887454 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887460 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:00.081) 0:01:32.881 **********
2026-03-09 00:51:35.887467 | orchestrator |
2026-03-09 00:51:35.887474 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887481 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.077) 0:01:32.958 **********
2026-03-09 00:51:35.887488 | orchestrator |
2026-03-09 00:51:35.887495 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887502 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.296) 0:01:33.255 **********
2026-03-09 00:51:35.887510 | orchestrator |
2026-03-09 00:51:35.887521 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887532 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.095) 0:01:33.351 **********
2026-03-09 00:51:35.887544 | orchestrator |
2026-03-09 00:51:35.887570 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887587 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.115) 0:01:33.466 **********
2026-03-09 00:51:35.887599 | orchestrator |
2026-03-09 00:51:35.887611 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-09 00:51:35.887622 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.075) 0:01:33.542 **********
2026-03-09 00:51:35.887632 | orchestrator |
2026-03-09 00:51:35.887642 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-09 00:51:35.887653 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.097) 0:01:33.640 **********
2026-03-09 00:51:35.887665 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:35.887677 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:35.887688 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:35.887700 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:35.887712 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:35.887724 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:35.887735 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:35.887748 | orchestrator |
2026-03-09 00:51:35.887756 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-09 00:51:35.887765 | orchestrator | Monday 09 March 2026 00:50:47 +0000 (0:00:33.834) 0:02:07.475 **********
2026-03-09 00:51:35.887777 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:35.887787 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:35.887807 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:35.887817 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:35.887828 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:35.887840 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:35.887851 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:35.887863 | orchestrator |
2026-03-09 00:51:35.887872 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-09 00:51:35.887879 | orchestrator | Monday 09 March 2026 00:51:20 +0000 (0:00:33.170) 0:02:40.645 **********
2026-03-09 00:51:35.887885 | orchestrator | ok: [testbed-manager]
2026-03-09 00:51:35.887893 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:51:35.887899 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:51:35.887906 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:51:35.887913 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:51:35.887920 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:51:35.887926 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:51:35.887933 | orchestrator |
2026-03-09 00:51:35.887940 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-09 00:51:35.887947 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:02.453) 0:02:43.099 **********
2026-03-09 00:51:35.887954 | orchestrator | changed: [testbed-manager]
2026-03-09 00:51:35.887961 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:51:35.887968 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:51:35.887974 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:51:35.887981 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:51:35.887988 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:51:35.887994 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:51:35.888001 | orchestrator |
2026-03-09 00:51:35.888008 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:51:35.888016 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888024 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888031 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888046 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888054 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888061 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888068 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-09 00:51:35.888075 | orchestrator |
2026-03-09 00:51:35.888082 | orchestrator |
2026-03-09 00:51:35.888090 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:51:35.888097 | orchestrator | Monday 09 March 2026 00:51:33 +0000 (0:00:10.256) 0:02:53.356 **********
2026-03-09 00:51:35.888105 | orchestrator | ===============================================================================
2026-03-09 00:51:35.888113 | orchestrator | common : Restart fluentd container ------------------------------------- 33.83s
2026-03-09 00:51:35.888120 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.17s
2026-03-09 00:51:35.888127 | orchestrator | common : Copying over config.json files for services ------------------- 12.33s
2026-03-09 00:51:35.888134 | orchestrator | common : Restart cron container ---------------------------------------- 10.26s
2026-03-09 00:51:35.888142 | orchestrator | common : Copying over fluentd.conf ------------------------------------- 10.02s
2026-03-09 00:51:35.888154 | orchestrator | common : Copying over cron logrotate config file ------------------------ 8.23s
2026-03-09 00:51:35.888161 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.35s
2026-03-09 00:51:35.888169 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.93s
2026-03-09 00:51:35.888176 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 5.51s
2026-03-09 00:51:35.888183 | orchestrator | common : Ensuring config directories have correct owner and permission --- 5.28s
2026-03-09 00:51:35.888191 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.10s
2026-03-09 00:51:35.888198 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.01s
2026-03-09 00:51:35.888205 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.80s
2026-03-09 00:51:35.888212 | orchestrator | common : Check common containers ---------------------------------------- 3.57s
2026-03-09 00:51:35.888220 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.37s
2026-03-09 00:51:35.888227 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.90s
2026-03-09 00:51:35.888236 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.45s
2026-03-09 00:51:35.888288 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.18s
2026-03-09 00:51:35.888302 | orchestrator | common : Creating log volume -------------------------------------------- 1.84s
2026-03-09 00:51:35.888314 | orchestrator | common : include_tasks -------------------------------------------------- 1.82s
2026-03-09 00:51:35.888326 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:35.888338 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:35.888480 | orchestrator | 2026-03-09 00:51:35 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:35.888492 | orchestrator | 2026-03-09 00:51:35 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:38.932612 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:38.933130 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:38.934201 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:38.938268 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:38.943105 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:38.945602 | orchestrator | 2026-03-09 00:51:38 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:38.945730 | orchestrator | 2026-03-09 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:41.997818 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:42.001555 | orchestrator | 2026-03-09 00:51:41 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:42.002505 | orchestrator | 2026-03-09 00:51:42 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:42.003344 | orchestrator | 2026-03-09 00:51:42 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:42.004795 | orchestrator | 2026-03-09 00:51:42 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:42.005862 | orchestrator | 2026-03-09 00:51:42 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:42.006095 | orchestrator | 2026-03-09 00:51:42 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:45.089982 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:45.092385 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:45.093215 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:45.094145 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:45.099209 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:45.099888 | orchestrator | 2026-03-09 00:51:45 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:45.099933 | orchestrator | 2026-03-09 00:51:45 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:48.156878 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:48.156984 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:48.158593 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:48.159877 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:48.163137 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:48.165653 | orchestrator | 2026-03-09 00:51:48 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:48.165713 | orchestrator | 2026-03-09 00:51:48 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:51.221980 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:51.222640 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:51.223970 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:51.226140 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:51.226873 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:51.228119 | orchestrator | 2026-03-09 00:51:51 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:51.228151 | orchestrator | 2026-03-09 00:51:51 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:54.278312 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:54.278401 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:54.279302 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:54.279335 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state STARTED
2026-03-09 00:51:54.283388 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:54.283997 | orchestrator | 2026-03-09 00:51:54 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:54.284133 | orchestrator | 2026-03-09 00:51:54 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:57.371806 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:51:57.374911 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:51:57.375003 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:51:57.376038 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task 3c0ca48d-6c61-48c3-9abc-9088a74dcf3c is in state SUCCESS
2026-03-09 00:51:57.377418 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:51:57.378552 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:51:57.379797 | orchestrator | 2026-03-09 00:51:57 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:51:57.379844 | orchestrator | 2026-03-09 00:51:57 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:00.435008 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:00.435260 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:00.436381 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:00.437957 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:52:00.438883 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:00.439715 | orchestrator | 2026-03-09 00:52:00 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:00.439770 | orchestrator | 2026-03-09 00:52:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:03.504637 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:03.508330 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:03.510177 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:03.510946 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:52:03.511858 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:03.513036 | orchestrator | 2026-03-09 00:52:03 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:03.513069 | orchestrator | 2026-03-09 00:52:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:06.667352 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:06.670778 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:06.672239 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:06.674650 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:52:06.676695 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:06.678350 | orchestrator | 2026-03-09 00:52:06 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:06.679284 | orchestrator | 2026-03-09 00:52:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:09.784995 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:09.785854 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:09.785887 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:09.786704 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:52:09.787446 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:09.788160 | orchestrator | 2026-03-09 00:52:09 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:09.788191 | orchestrator | 2026-03-09 00:52:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:12.871675 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:12.871763 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:12.871777 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:12.871786 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state STARTED
2026-03-09 00:52:12.871794 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:12.871802 | orchestrator | 2026-03-09 00:52:12 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:12.871810 | orchestrator | 2026-03-09 00:52:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:15.904914 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:15.905686 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:15.906846 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:15.909339 | orchestrator |
2026-03-09 00:52:15.909393 | orchestrator |
2026-03-09 00:52:15.909407 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:52:15.909419 | orchestrator |
2026-03-09 00:52:15.909430 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:52:15.909441 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.591) 0:00:00.591 **********
2026-03-09 00:52:15.909461 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:15.909481 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:15.909501 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:15.909520 | orchestrator |
2026-03-09 00:52:15.909533 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:52:15.909552 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.391) 0:00:00.983 **********
2026-03-09 00:52:15.909569 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-09 00:52:15.909587 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-09 00:52:15.909662 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-09 00:52:15.909687 | orchestrator |
2026-03-09 00:52:15.909707 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-09 00:52:15.909725 | orchestrator |
2026-03-09 00:52:15.909761 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-09 00:52:15.909773 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.675) 0:00:01.659 **********
2026-03-09 00:52:15.909783 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:52:15.909795 | orchestrator |
2026-03-09 00:52:15.909806 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-09 00:52:15.909816 | orchestrator | Monday 09 March 2026 00:51:43 +0000 (0:00:00.830) 0:00:02.490 **********
2026-03-09 00:52:15.909827 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-09 00:52:15.909838 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-09 00:52:15.909851 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-09 00:52:15.909864 | orchestrator |
2026-03-09 00:52:15.909877 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-09 00:52:15.909890 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:00.895) 0:00:03.385 **********
2026-03-09 00:52:15.909902 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-09 00:52:15.909915 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-09 00:52:15.909927 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-09 00:52:15.909940 | orchestrator |
2026-03-09 00:52:15.909952 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-09 00:52:15.909965 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:03.688) 0:00:07.073 **********
2026-03-09 00:52:15.909978 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:15.909990 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:15.910003 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:15.910059 | orchestrator |
2026-03-09 00:52:15.910075 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-09 00:52:15.910088 | orchestrator | Monday 09 March 2026 00:51:51 +0000 (0:00:03.106) 0:00:10.180 **********
2026-03-09 00:52:15.910101 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:15.910113 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:15.910125 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:15.910137 | orchestrator |
2026-03-09 00:52:15.910150 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:52:15.910163 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.910179 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.910191 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.910201 | orchestrator |
2026-03-09 00:52:15.910267 | orchestrator |
2026-03-09 00:52:15.910279 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:52:15.910290 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:02.532) 0:00:12.713 **********
2026-03-09 00:52:15.910301 | orchestrator | ===============================================================================
2026-03-09 00:52:15.910311 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.69s
2026-03-09 00:52:15.910322 | orchestrator | memcached : Check memcached container ----------------------------------- 3.11s 2026-03-09 00:52:15.910332 | orchestrator | memcached : Restart memcached container --------------------------------- 2.53s 2026-03-09 00:52:15.910343 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s 2026-03-09 00:52:15.910354 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.83s 2026-03-09 00:52:15.910364 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2026-03-09 00:52:15.910375 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-03-09 00:52:15.910394 | orchestrator | 2026-03-09 00:52:15.910405 | orchestrator | 2026-03-09 00:52:15.910416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:52:15.910426 | orchestrator | 2026-03-09 00:52:15.910437 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:52:15.910448 | orchestrator | Monday 09 March 2026 00:51:40 +0000 (0:00:00.755) 0:00:00.755 ********** 2026-03-09 00:52:15.910458 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:15.910469 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:15.910479 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:15.910490 | orchestrator | 2026-03-09 00:52:15.910502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:52:15.910539 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.493) 0:00:01.249 ********** 2026-03-09 00:52:15.910551 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-09 00:52:15.910562 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-09 00:52:15.910573 | orchestrator | ok: [testbed-node-2] => 
(item=enable_redis_True)
2026-03-09 00:52:15.910584 | orchestrator |
2026-03-09 00:52:15.910595 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-09 00:52:15.910606 | orchestrator |
2026-03-09 00:52:15.910617 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-09 00:52:15.910627 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.842) 0:00:02.091 **********
2026-03-09 00:52:15.910638 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:52:15.910649 | orchestrator |
2026-03-09 00:52:15.910660 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-09 00:52:15.910671 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.798) 0:00:02.890 **********
2026-03-09 00:52:15.910684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-09 00:52:15.910700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910780 | orchestrator | 2026-03-09 00:52:15.910791 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-09 00:52:15.910802 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:01.532) 0:00:04.422 ********** 2026-03-09 00:52:15.910813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910901 | orchestrator | 2026-03-09 00:52:15.910912 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-09 00:52:15.910923 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:04.460) 0:00:08.883 ********** 2026-03-09 00:52:15.910934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.910986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911019 | orchestrator | 2026-03-09 00:52:15.911031 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-09 00:52:15.911041 | orchestrator | Monday 09 March 2026 00:51:52 +0000 (0:00:03.885) 0:00:12.768 ********** 2026-03-09 00:52:15.911053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:52:15.911139 | orchestrator | 2026-03-09 00:52:15.911151 
| orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-09 00:52:15.911162 | orchestrator | Monday 09 March 2026 00:51:56 +0000 (0:00:03.633) 0:00:16.401 **********
2026-03-09 00:52:15.911177 | orchestrator |
2026-03-09 00:52:15.911189 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-09 00:52:15.911200 | orchestrator | Monday 09 March 2026 00:51:56 +0000 (0:00:00.087) 0:00:16.488 **********
2026-03-09 00:52:15.911238 | orchestrator |
2026-03-09 00:52:15.911258 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-09 00:52:15.911278 | orchestrator | Monday 09 March 2026 00:51:56 +0000 (0:00:00.129) 0:00:16.618 **********
2026-03-09 00:52:15.911297 | orchestrator |
2026-03-09 00:52:15.911310 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-09 00:52:15.911320 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:00.339) 0:00:16.957 **********
2026-03-09 00:52:15.911331 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:15.911342 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:15.911352 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:15.911363 | orchestrator |
2026-03-09 00:52:15.911374 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-09 00:52:15.911385 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:11.153) 0:00:28.110 **********
2026-03-09 00:52:15.911396 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:15.911407 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:15.911418 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:15.911429 | orchestrator |
2026-03-09 00:52:15.911440 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:52:15.911451 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.911469 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.911480 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:52:15.911491 | orchestrator |
2026-03-09 00:52:15.911502 | orchestrator |
2026-03-09 00:52:15.911513 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:52:15.911523 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:03.722) 0:00:31.832 **********
2026-03-09 00:52:15.911534 | orchestrator | ===============================================================================
2026-03-09 00:52:15.911545 | orchestrator | redis : Restart redis container ---------------------------------------- 11.15s
2026-03-09 00:52:15.911556 | orchestrator | redis : Copying over default config.json files -------------------------- 4.46s
2026-03-09 00:52:15.911566 | orchestrator | redis : Copying over redis config files --------------------------------- 3.89s
2026-03-09 00:52:15.911577 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.72s
2026-03-09 00:52:15.911588 | orchestrator | redis : Check redis containers ------------------------------------------ 3.63s
2026-03-09 00:52:15.911599 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.53s
2026-03-09 00:52:15.911610 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2026-03-09 00:52:15.911621 | orchestrator | redis : include_tasks --------------------------------------------------- 0.80s
2026-03-09 00:52:15.911632 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.56s
2026-03-09 00:52:15.911643 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2026-03-09 00:52:15.911654 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task 2aa139a6-a273-4525-8f4f-c05e8010e0b5 is in state SUCCESS
2026-03-09 00:52:15.911665 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:15.912524 | orchestrator | 2026-03-09 00:52:15 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:15.913142 | orchestrator | 2026-03-09 00:52:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:19.004310 | orchestrator | 2026-03-09 00:52:19 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:19.009134 | orchestrator | 2026-03-09 00:52:19 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:19.012256 | orchestrator | 2026-03-09 00:52:19 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:19.015855 | orchestrator | 2026-03-09 00:52:19 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:19.017999 | orchestrator | 2026-03-09 00:52:19 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:19.018807 | orchestrator | 2026-03-09 00:52:19 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:22.058120 | orchestrator | 2026-03-09 00:52:22 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:22.064861 | orchestrator | 2026-03-09 00:52:22 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:22.064969 | orchestrator | 2026-03-09 00:52:22 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:22.064993 | orchestrator | 2026-03-09 00:52:22 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:22.065011 | orchestrator | 2026-03-09 00:52:22 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:22.065070 | orchestrator | 2026-03-09 00:52:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:25.267333 | orchestrator | 2026-03-09 00:52:25 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:25.271249 | orchestrator | 2026-03-09 00:52:25 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:25.273548 | orchestrator | 2026-03-09 00:52:25 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:25.274655 | orchestrator | 2026-03-09 00:52:25 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:25.281227 | orchestrator | 2026-03-09 00:52:25 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:25.281272 | orchestrator | 2026-03-09 00:52:25 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:28.395800 | orchestrator | 2026-03-09 00:52:28 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:28.397068 | orchestrator | 2026-03-09 00:52:28 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:28.397126 | orchestrator | 2026-03-09 00:52:28 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:28.397146 | orchestrator | 2026-03-09 00:52:28 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:28.397956 | orchestrator | 2026-03-09 00:52:28 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:28.397985 | orchestrator | 2026-03-09 00:52:28 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:31.445992 | orchestrator | 2026-03-09 00:52:31 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:31.447648 | orchestrator | 2026-03-09 00:52:31 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:31.448665 | orchestrator | 2026-03-09 00:52:31 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:31.449604 | orchestrator | 2026-03-09 00:52:31 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:31.451785 | orchestrator | 2026-03-09 00:52:31 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:31.451877 | orchestrator | 2026-03-09 00:52:31 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:34.518543 | orchestrator | 2026-03-09 00:52:34 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:34.524132 | orchestrator | 2026-03-09 00:52:34 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:34.533122 | orchestrator | 2026-03-09 00:52:34 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:34.534250 | orchestrator | 2026-03-09 00:52:34 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:34.539240 | orchestrator | 2026-03-09 00:52:34 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:34.539320 | orchestrator | 2026-03-09 00:52:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:37.665086 | orchestrator | 2026-03-09 00:52:37 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:37.666625 | orchestrator | 2026-03-09 00:52:37 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:37.668261 | orchestrator | 2026-03-09 00:52:37 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:37.670562 | orchestrator | 2026-03-09 00:52:37 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:37.672024 | orchestrator | 2026-03-09 00:52:37 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:37.672065 | orchestrator | 2026-03-09 00:52:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:40.707497 | orchestrator | 2026-03-09 00:52:40 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:40.708512 | orchestrator | 2026-03-09 00:52:40 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:40.710233 | orchestrator | 2026-03-09 00:52:40 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:40.714741 | orchestrator | 2026-03-09 00:52:40 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:40.717393 | orchestrator | 2026-03-09 00:52:40 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:40.717471 | orchestrator | 2026-03-09 00:52:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:43.868416 | orchestrator | 2026-03-09 00:52:43 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:43.869065 | orchestrator | 2026-03-09 00:52:43 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:43.870116 | orchestrator | 2026-03-09 00:52:43 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:43.871542 | orchestrator | 2026-03-09 00:52:43 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:43.872844 | orchestrator | 2026-03-09 00:52:43 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:43.872981 | orchestrator | 2026-03-09 00:52:43 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:46.916631 | orchestrator | 2026-03-09 00:52:46 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:46.917668 | orchestrator | 2026-03-09 00:52:46 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:46.919026 | orchestrator | 2026-03-09 00:52:46 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:46.920023 | orchestrator | 2026-03-09 00:52:46 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:46.920995 | orchestrator | 2026-03-09 00:52:46 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:46.921023 | orchestrator | 2026-03-09 00:52:46 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:50.004688 | orchestrator | 2026-03-09 00:52:50 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:50.005923 | orchestrator | 2026-03-09 00:52:50 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:50.007083 | orchestrator | 2026-03-09 00:52:50 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:50.008375 | orchestrator | 2026-03-09 00:52:50 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:50.009511 | orchestrator | 2026-03-09 00:52:50 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:50.009596 | orchestrator | 2026-03-09 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:53.206650 | orchestrator | 2026-03-09 00:52:53 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:53.208128 | orchestrator | 2026-03-09 00:52:53 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:53.208224 | orchestrator | 2026-03-09 00:52:53 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:53.209292 | orchestrator | 2026-03-09 00:52:53 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:53.209808 | orchestrator | 2026-03-09 00:52:53 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:53.209855 | orchestrator | 2026-03-09 00:52:53 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:56.257874 | orchestrator | 2026-03-09 00:52:56 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:56.258835 | orchestrator | 2026-03-09 00:52:56 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:56.260553 | orchestrator | 2026-03-09 00:52:56 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:56.261895 | orchestrator | 2026-03-09 00:52:56 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:56.264022 | orchestrator | 2026-03-09 00:52:56 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:56.264764 | orchestrator | 2026-03-09 00:52:56 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:52:59.292982 | orchestrator | 2026-03-09 00:52:59 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:52:59.294681 | orchestrator | 2026-03-09 00:52:59 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state STARTED
2026-03-09 00:52:59.296752 | orchestrator | 2026-03-09 00:52:59 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:52:59.298642 | orchestrator | 2026-03-09 00:52:59 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:52:59.300094 | orchestrator | 2026-03-09 00:52:59 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:52:59.300302 | orchestrator | 2026-03-09 00:52:59 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:53:02.337494 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:53:02.342333 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task c0aa5b6f-f082-4d40-91e6-6b0294b9c90d is in state SUCCESS
2026-03-09 00:53:02.343810 | orchestrator |
2026-03-09 00:53:02.343903 | orchestrator |
2026-03-09 00:53:02.343924 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:53:02.343942 | orchestrator |
2026-03-09 00:53:02.343958 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:53:02.343974 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.473) 0:00:00.473 **********
2026-03-09 00:53:02.343990 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:53:02.344008 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:53:02.344023 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:53:02.344038 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:53:02.344048 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:53:02.344056 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:53:02.344065 | orchestrator |
2026-03-09 00:53:02.344075 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:53:02.344084 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:01.242) 0:00:01.716 **********
2026-03-09 00:53:02.344093 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344102 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344111 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344150 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344195 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344210 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:53:02.344243 | orchestrator
| 2026-03-09 00:53:02.344268 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-09 00:53:02.344281 | orchestrator | 2026-03-09 00:53:02.344295 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-09 00:53:02.344310 | orchestrator | Monday 09 March 2026 00:51:43 +0000 (0:00:01.143) 0:00:02.859 ********** 2026-03-09 00:53:02.344326 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:53:02.344343 | orchestrator | 2026-03-09 00:53:02.344357 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 00:53:02.344371 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:02.138) 0:00:04.998 ********** 2026-03-09 00:53:02.344386 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-09 00:53:02.344401 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:53:02.344417 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:53:02.344432 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:53:02.344447 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-09 00:53:02.344463 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 00:53:02.344477 | orchestrator | 2026-03-09 00:53:02.344492 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 00:53:02.344503 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:02.218) 0:00:07.216 ********** 2026-03-09 00:53:02.344512 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:53:02.344521 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:53:02.344531 | orchestrator | changed: [testbed-node-0] => 
(item=openvswitch) 2026-03-09 00:53:02.344541 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:53:02.344549 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-09 00:53:02.344558 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 00:53:02.344567 | orchestrator | 2026-03-09 00:53:02.344576 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 00:53:02.344585 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:02.912) 0:00:10.129 ********** 2026-03-09 00:53:02.344593 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-09 00:53:02.344602 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:53:02.344611 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-09 00:53:02.344620 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:53:02.344629 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-09 00:53:02.344638 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:53:02.344646 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-09 00:53:02.344655 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:53:02.344679 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-09 00:53:02.344689 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:53:02.344697 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-09 00:53:02.344706 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:53:02.344714 | orchestrator | 2026-03-09 00:53:02.344723 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-09 00:53:02.344732 | orchestrator | Monday 09 March 2026 00:51:52 +0000 (0:00:01.697) 0:00:11.826 ********** 2026-03-09 00:53:02.344740 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:53:02.344749 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 00:53:02.344768 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:53:02.344777 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:53:02.344786 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:53:02.344795 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:53:02.344803 | orchestrator | 2026-03-09 00:53:02.344812 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-09 00:53:02.344821 | orchestrator | Monday 09 March 2026 00:51:54 +0000 (0:00:01.721) 0:00:13.547 ********** 2026-03-09 00:53:02.344852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.344988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345005 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345111 | orchestrator | 2026-03-09 00:53:02.345124 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-09 00:53:02.345133 | orchestrator | Monday 09 March 2026 00:51:58 +0000 (0:00:03.867) 0:00:17.415 ********** 2026-03-09 00:53:02.345143 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345442 | orchestrator | 2026-03-09 00:53:02.345457 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-09 00:53:02.345469 | orchestrator | Monday 09 March 2026 00:52:04 +0000 (0:00:05.836) 0:00:23.251 ********** 2026-03-09 00:53:02.345479 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:53:02.345488 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:53:02.345497 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:53:02.345506 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:53:02.345515 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:53:02.345525 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:53:02.345533 | orchestrator | 2026-03-09 00:53:02.345542 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-09 00:53:02.345551 | orchestrator | 
Monday 09 March 2026 00:52:06 +0000 (0:00:01.948) 0:00:25.200 ********** 2026-03-09 00:53:02.345561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:53:02.345717 | orchestrator | 2026-03-09 00:53:02.345727 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345747 | orchestrator | Monday 09 March 2026 00:52:09 +0000 (0:00:03.453) 0:00:28.653 ********** 2026-03-09 00:53:02.345756 | orchestrator | 2026-03-09 00:53:02.345766 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345775 | orchestrator | Monday 09 March 2026 00:52:09 +0000 (0:00:00.141) 0:00:28.795 ********** 2026-03-09 00:53:02.345783 | orchestrator | 2026-03-09 00:53:02.345792 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345801 | orchestrator | Monday 09 March 2026 00:52:09 +0000 (0:00:00.327) 0:00:29.122 ********** 
2026-03-09 00:53:02.345810 | orchestrator | 2026-03-09 00:53:02.345819 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345828 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:00.386) 0:00:29.509 ********** 2026-03-09 00:53:02.345838 | orchestrator | 2026-03-09 00:53:02.345846 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345855 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:00.221) 0:00:29.730 ********** 2026-03-09 00:53:02.345864 | orchestrator | 2026-03-09 00:53:02.345873 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:53:02.345882 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:00.178) 0:00:29.908 ********** 2026-03-09 00:53:02.345891 | orchestrator | 2026-03-09 00:53:02.345900 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-09 00:53:02.345909 | orchestrator | Monday 09 March 2026 00:52:10 +0000 (0:00:00.226) 0:00:30.134 ********** 2026-03-09 00:53:02.345918 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:53:02.345927 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:53:02.345936 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:53:02.345945 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:53:02.345960 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:53:02.345975 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:53:02.345990 | orchestrator | 2026-03-09 00:53:02.346011 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-09 00:53:02.346098 | orchestrator | Monday 09 March 2026 00:52:21 +0000 (0:00:10.477) 0:00:40.612 ********** 2026-03-09 00:53:02.346116 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:53:02.346131 | orchestrator | ok: [testbed-node-1] 2026-03-09 
00:53:02.346147 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:53:02.346188 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:53:02.346204 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:53:02.346219 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:53:02.346234 | orchestrator | 2026-03-09 00:53:02.346250 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:53:02.346265 | orchestrator | Monday 09 March 2026 00:52:23 +0000 (0:00:02.174) 0:00:42.786 ********** 2026-03-09 00:53:02.346280 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:53:02.346294 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:53:02.346310 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:53:02.346336 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:53:02.346347 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:53:02.346355 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:53:02.346374 | orchestrator | 2026-03-09 00:53:02.346383 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-09 00:53:02.346393 | orchestrator | Monday 09 March 2026 00:52:34 +0000 (0:00:10.723) 0:00:53.510 ********** 2026-03-09 00:53:02.346414 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-09 00:53:02.346424 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-09 00:53:02.346433 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-09 00:53:02.346442 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-09 00:53:02.346463 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-4'}) 2026-03-09 00:53:02.346472 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-09 00:53:02.346481 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-09 00:53:02.346490 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-09 00:53:02.346498 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-09 00:53:02.346507 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-09 00:53:02.346516 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-09 00:53:02.346525 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-09 00:53:02.346534 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:53:02.346543 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:53:02.346552 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:53:02.346561 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:53:02.346570 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:53:02.346578 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2026-03-09 00:53:02.346587 | orchestrator | 2026-03-09 00:53:02.346596 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-09 00:53:02.346605 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:10.160) 0:01:03.670 ********** 2026-03-09 00:53:02.346614 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-09 00:53:02.346623 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:53:02.346632 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-09 00:53:02.346641 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-09 00:53:02.346650 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:53:02.346659 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-09 00:53:02.346668 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:53:02.346677 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-09 00:53:02.346691 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-09 00:53:02.346707 | orchestrator | 2026-03-09 00:53:02.346723 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-09 00:53:02.346738 | orchestrator | Monday 09 March 2026 00:52:47 +0000 (0:00:03.450) 0:01:07.121 ********** 2026-03-09 00:53:02.346754 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:53:02.346771 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:53:02.346787 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:53:02.346803 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:53:02.346819 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:53:02.346844 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:53:02.346861 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:53:02.346877 | orchestrator | changed: 
[testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:53:02.346892 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:53:02.346919 | orchestrator | 2026-03-09 00:53:02.346936 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:53:02.346951 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:03.502) 0:01:10.623 ********** 2026-03-09 00:53:02.346966 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:53:02.346982 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:53:02.346997 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:53:02.347010 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:53:02.347019 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:53:02.347028 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:53:02.347036 | orchestrator | 2026-03-09 00:53:02.347046 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:53:02.347055 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:53:02.347075 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:53:02.347085 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:53:02.347094 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:53:02.347103 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:53:02.347112 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:53:02.347121 | orchestrator | 2026-03-09 00:53:02.347130 | orchestrator | 2026-03-09 00:53:02.347139 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 00:53:02.347148 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:08.043) 0:01:18.667 ********** 2026-03-09 00:53:02.347157 | orchestrator | =============================================================================== 2026-03-09 00:53:02.347191 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.77s 2026-03-09 00:53:02.347201 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.48s 2026-03-09 00:53:02.347210 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.16s 2026-03-09 00:53:02.347219 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.84s 2026-03-09 00:53:02.347228 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.87s 2026-03-09 00:53:02.347238 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.50s 2026-03-09 00:53:02.347247 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.45s 2026-03-09 00:53:02.347256 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.45s 2026-03-09 00:53:02.347265 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.91s 2026-03-09 00:53:02.347274 | orchestrator | module-load : Load modules ---------------------------------------------- 2.22s 2026-03-09 00:53:02.347283 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.17s 2026-03-09 00:53:02.347292 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.14s 2026-03-09 00:53:02.347301 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.95s 2026-03-09 00:53:02.347309 | orchestrator | openvswitch : Create 
/run/openvswitch directory on host ----------------- 1.72s 2026-03-09 00:53:02.347318 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.70s 2026-03-09 00:53:02.347327 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.48s 2026-03-09 00:53:02.347344 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.24s 2026-03-09 00:53:02.347354 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2026-03-09 00:53:02.347363 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:53:02.347372 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:53:02.347682 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:53:02.347745 | orchestrator | 2026-03-09 00:53:02 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED 2026-03-09 00:53:02.347756 | orchestrator | 2026-03-09 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:53:05.396464 | orchestrator | 2026-03-09 00:53:05 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:53:05.398204 | orchestrator | 2026-03-09 00:53:05 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED 2026-03-09 00:53:05.398871 | orchestrator | 2026-03-09 00:53:05 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:53:05.399483 | orchestrator | 2026-03-09 00:53:05 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:53:05.400329 | orchestrator | 2026-03-09 00:53:05 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED 2026-03-09 00:53:05.400355 | orchestrator | 2026-03-09 00:53:05 | INFO  | Wait 1 second(s) until the next 
check 2026-03-09 00:53:08.429878 | orchestrator | [... repeated polling cycles trimmed: the same five tasks (e3d3e94f-d226-458e-9c7a-142a8e99a87c, 9fe8c0cd-b672-4cf1-aed9-e91ececa32be, 283232eb-8c62-43aa-9508-07eb265e2c3d, 220e9859-b094-4c6d-aa2a-3ee4f04bd493, 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2) reported in state STARTED, re-checked every ~3 seconds from 00:53:08 through 00:54:06 ...] 2026-03-09 00:54:09.986964 | orchestrator | 2026-03-09 00:54:09 | INFO  | Task 
e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:09.987566 | orchestrator | 2026-03-09 00:54:09 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state STARTED
2026-03-09 00:54:09.988573 | orchestrator | 2026-03-09 00:54:09 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:09.989287 | orchestrator | 2026-03-09 00:54:09 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:09.990205 | orchestrator | 2026-03-09 00:54:09 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:09.990247 | orchestrator | 2026-03-09 00:54:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:13.033740 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:13.035224 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state STARTED
2026-03-09 00:54:13.039182 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task 9fe8c0cd-b672-4cf1-aed9-e91ececa32be is in state SUCCESS
2026-03-09 00:54:13.040976 | orchestrator |
2026-03-09 00:54:13.041022 | orchestrator |
2026-03-09 00:54:13.041034 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-09 00:54:13.041047 | orchestrator |
2026-03-09 00:54:13.041058 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-09 00:54:13.041070 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.259) 0:00:00.259 **********
2026-03-09 00:54:13.041128 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:54:13.041142 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:54:13.041153 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:54:13.041164 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.041175 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.041186 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.041197 | orchestrator |
2026-03-09 00:54:13.041208 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-09 00:54:13.041219 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:01.093) 0:00:01.353 **********
2026-03-09 00:54:13.041231 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.041244 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.041255 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.041265 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.041277 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.041288 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.041298 | orchestrator |
2026-03-09 00:54:13.041309 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-09 00:54:13.041321 | orchestrator | Monday 09 March 2026 00:48:43 +0000 (0:00:00.956) 0:00:02.309 **********
2026-03-09 00:54:13.041331 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.041342 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.041353 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.041364 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.041374 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.041385 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.041396 | orchestrator |
2026-03-09 00:54:13.041407 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-09 00:54:13.041418 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:01.166) 0:00:03.475 **********
2026-03-09 00:54:13.041429 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:54:13.041439 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:54:13.041450 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:54:13.041461 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.041472 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.041482 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.041493 | orchestrator |
2026-03-09 00:54:13.041504 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-09 00:54:13.041515 | orchestrator | Monday 09 March 2026 00:48:47 +0000 (0:00:02.942) 0:00:06.418 **********
2026-03-09 00:54:13.041526 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:54:13.041537 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:54:13.041550 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:54:13.041562 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.041575 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.041589 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.041608 | orchestrator |
2026-03-09 00:54:13.041674 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-09 00:54:13.041700 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:01.859) 0:00:08.278 **********
2026-03-09 00:54:13.041722 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:54:13.041745 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:54:13.041765 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:54:13.041783 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.041797 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.041810 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.041823 | orchestrator |
2026-03-09 00:54:13.041851 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-09 00:54:13.041863 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:01.507) 0:00:09.786 **********
2026-03-09 00:54:13.041887 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.041904 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.041927 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.041952 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.041970 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.041987 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.042005 | orchestrator |
2026-03-09 00:54:13.042143 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-09 00:54:13.042166 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:01.061) 0:00:10.848 **********
2026-03-09 00:54:13.042185 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.042204 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.042223 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.042238 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.042254 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.042271 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.042290 | orchestrator |
2026-03-09 00:54:13.042309 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-09 00:54:13.042329 | orchestrator | Monday 09 March 2026 00:48:52 +0000 (0:00:01.084) 0:00:11.933 **********
2026-03-09 00:54:13.042349 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042368 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042386 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.042397 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042408 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042419 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.042430 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042441 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042452 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.042463 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042492 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042503 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.042514 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042525 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042536 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.042546 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:54:13.042557 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:54:13.042568 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.042579 | orchestrator |
2026-03-09 00:54:13.042589 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-09 00:54:13.042600 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:00.838) 0:00:12.771 **********
2026-03-09 00:54:13.042611 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.042622 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.042633 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.042644 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.042655 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.042666 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.042676 | orchestrator |
2026-03-09 00:54:13.042687 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-09 00:54:13.042713 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:01.331) 0:00:14.102 **********
2026-03-09 00:54:13.042725 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:54:13.042736 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:54:13.042747 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:54:13.042758 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.042768 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.042779 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.042790 | orchestrator |
2026-03-09 00:54:13.042801 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-09 00:54:13.042812 | orchestrator | Monday 09 March 2026 00:48:56 +0000 (0:00:01.313) 0:00:15.416 **********
2026-03-09 00:54:13.042823 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.042834 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:54:13.042844 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:54:13.042855 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:54:13.042866 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.042876 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.042887 | orchestrator |
2026-03-09 00:54:13.042898 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-09 00:54:13.042909 | orchestrator | Monday 09 March 2026 00:49:03 +0000 (0:00:07.079) 0:00:22.496 **********
2026-03-09 00:54:13.042919 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.042930 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.042941 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.042952 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.042963 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.042975 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.042993 | orchestrator |
2026-03-09 00:54:13.043012 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-09 00:54:13.043034 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:02.013) 0:00:24.509 **********
2026-03-09 00:54:13.043062 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.043145 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.043166 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.043184 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.043202 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.043220 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.043238 | orchestrator |
2026-03-09 00:54:13.043257 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-09 00:54:13.043289 | orchestrator | Monday 09 March 2026 00:49:08 +0000 (0:00:02.837) 0:00:27.346 **********
2026-03-09 00:54:13.043307 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.043325 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.043343 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.043363 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.043383 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.043403 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.043421 | orchestrator |
2026-03-09 00:54:13.043441 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-09 00:54:13.043452 | orchestrator | Monday 09 March 2026 00:49:09 +0000 (0:00:01.338) 0:00:28.685 **********
2026-03-09 00:54:13.043464 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-09 00:54:13.043475 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-09 00:54:13.043486 | orchestrator | skipping: [testbed-node-3]
=> (item=rancher/k3s)
2026-03-09 00:54:13.043497 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-09 00:54:13.043508 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.043519 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.043530 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-09 00:54:13.043541 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-09 00:54:13.043551 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-09 00:54:13.043576 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.043586 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-09 00:54:13.043597 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.043607 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-09 00:54:13.043617 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-09 00:54:13.043626 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.043636 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-09 00:54:13.043645 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-09 00:54:13.043655 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.043664 | orchestrator |
2026-03-09 00:54:13.043674 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-09 00:54:13.043696 | orchestrator | Monday 09 March 2026 00:49:11 +0000 (0:00:02.044) 0:00:30.729 **********
2026-03-09 00:54:13.043706 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.043716 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.043725 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.043735 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.043744 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.043753 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.043763 | orchestrator |
2026-03-09 00:54:13.043773 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-09 00:54:13.043785 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:02.582) 0:00:33.312 **********
2026-03-09 00:54:13.043802 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.043818 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.043833 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.043850 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.043866 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.043883 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.043901 | orchestrator |
2026-03-09 00:54:13.043912 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-09 00:54:13.043921 | orchestrator |
2026-03-09 00:54:13.043931 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-09 00:54:13.043941 | orchestrator | Monday 09 March 2026 00:49:17 +0000 (0:00:03.630) 0:00:36.942 **********
2026-03-09 00:54:13.043951 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.043961 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.043970 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.043980 | orchestrator |
2026-03-09 00:54:13.043989 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-09 00:54:13.043999 | orchestrator | Monday 09 March 2026 00:49:21 +0000 (0:00:03.581) 0:00:40.524 **********
2026-03-09 00:54:13.044010 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044019 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044029 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.044039 | orchestrator |
2026-03-09 00:54:13.044049 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-09 00:54:13.044059 | orchestrator | Monday 09 March 2026 00:49:24 +0000 (0:00:03.321) 0:00:43.846 **********
2026-03-09 00:54:13.044068 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044098 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044109 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.044118 | orchestrator |
2026-03-09 00:54:13.044128 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-09 00:54:13.044138 | orchestrator | Monday 09 March 2026 00:49:26 +0000 (0:00:02.017) 0:00:45.863 **********
2026-03-09 00:54:13.044148 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044157 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.044167 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044176 | orchestrator |
2026-03-09 00:54:13.044186 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-09 00:54:13.044204 | orchestrator | Monday 09 March 2026 00:49:27 +0000 (0:00:01.072) 0:00:46.936 **********
2026-03-09 00:54:13.044214 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.044223 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044233 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044243 | orchestrator |
2026-03-09 00:54:13.044252 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-09 00:54:13.044262 | orchestrator | Monday 09 March 2026 00:49:29 +0000 (0:00:01.661) 0:00:48.598 **********
2026-03-09 00:54:13.044272 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044281 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.044291 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.044300 | orchestrator |
2026-03-09 00:54:13.044310 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-09 00:54:13.044320 | orchestrator | Monday 09 March 2026 00:49:31 +0000 (0:00:01.843) 0:00:50.441 **********
2026-03-09 00:54:13.044335 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044345 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.044356 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.044365 | orchestrator |
2026-03-09 00:54:13.044378 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-09 00:54:13.044393 | orchestrator | Monday 09 March 2026 00:49:34 +0000 (0:00:03.383) 0:00:53.824 **********
2026-03-09 00:54:13.044409 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:13.044425 | orchestrator |
2026-03-09 00:54:13.044442 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-09 00:54:13.044457 | orchestrator | Monday 09 March 2026 00:49:35 +0000 (0:00:00.961) 0:00:54.786 **********
2026-03-09 00:54:13.044475 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044491 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044508 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.044518 | orchestrator |
2026-03-09 00:54:13.044528 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-09 00:54:13.044537 | orchestrator | Monday 09 March 2026 00:49:42 +0000 (0:00:07.064) 0:01:01.851 **********
2026-03-09 00:54:13.044547 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044557 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044567 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044576 | orchestrator |
2026-03-09 00:54:13.044586 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-09 00:54:13.044596 | orchestrator | Monday 09 March 2026 00:49:43 +0000 (0:00:00.978) 0:01:02.830 **********
2026-03-09 00:54:13.044605 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044615 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044625 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044634 | orchestrator |
2026-03-09 00:54:13.044643 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-09 00:54:13.044653 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:01.738) 0:01:04.568 **********
2026-03-09 00:54:13.044663 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044672 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044682 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044692 | orchestrator |
2026-03-09 00:54:13.044702 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-09 00:54:13.044720 | orchestrator | Monday 09 March 2026 00:49:48 +0000 (0:00:02.838) 0:01:07.406 **********
2026-03-09 00:54:13.044730 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.044740 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044750 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044759 | orchestrator |
2026-03-09 00:54:13.044769 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-09 00:54:13.044779 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:01.155) 0:01:08.562 **********
2026-03-09 00:54:13.044797 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.044807 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.044817 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.044826 | orchestrator |
2026-03-09 00:54:13.044836 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-09 00:54:13.044846 | orchestrator | Monday 09 March 2026
00:49:50 +0000 (0:00:01.108) 0:01:09.671 **********
2026-03-09 00:54:13.044855 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.044865 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.044875 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.044884 | orchestrator |
2026-03-09 00:54:13.044894 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-09 00:54:13.044904 | orchestrator | Monday 09 March 2026 00:49:53 +0000 (0:00:03.098) 0:01:12.770 **********
2026-03-09 00:54:13.044914 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044923 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044933 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.044943 | orchestrator |
2026-03-09 00:54:13.044952 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-09 00:54:13.044962 | orchestrator | Monday 09 March 2026 00:49:56 +0000 (0:00:02.643) 0:01:15.414 **********
2026-03-09 00:54:13.044971 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.044981 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.044991 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045000 | orchestrator |
2026-03-09 00:54:13.045010 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-09 00:54:13.045020 | orchestrator | Monday 09 March 2026 00:49:57 +0000 (0:00:01.396) 0:01:16.810 **********
2026-03-09 00:54:13.045030 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:54:13.045041 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:54:13.045051 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:54:13.045061 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:54:13.045070 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:54:13.045134 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:54:13.045148 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:54:13.045164 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:54:13.045174 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:54:13.045184 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:54:13.045194 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:54:13.045202 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:54:13.045210 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-09 00:54:13.045218 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-09 00:54:13.045232 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-09 00:54:13.045240 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.045248 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.045256 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045264 | orchestrator |
2026-03-09 00:54:13.045275 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-09 00:54:13.045288 | orchestrator | Monday 09 March 2026 00:50:52 +0000 (0:00:54.859) 0:02:11.670 **********
2026-03-09 00:54:13.045301 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.045313 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.045391 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.045404 | orchestrator |
2026-03-09 00:54:13.045417 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-09 00:54:13.045438 | orchestrator | Monday 09 March 2026 00:50:53 +0000 (0:00:00.746) 0:02:12.416 **********
2026-03-09 00:54:13.045452 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045466 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045479 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.045493 | orchestrator |
2026-03-09 00:54:13.045501 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-09 00:54:13.045509 | orchestrator | Monday 09 March 2026 00:50:54 +0000 (0:00:01.500) 0:02:13.916 **********
2026-03-09 00:54:13.045517 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045524 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045532 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.045544 | orchestrator |
2026-03-09 00:54:13.045595 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-09 00:54:13.045613 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:02.026) 0:02:15.942 **********
2026-03-09 00:54:13.045626 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045640 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045648 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.045656 | orchestrator |
2026-03-09 00:54:13.045664 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-09 00:54:13.045672 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:25.192) 0:02:41.135 **********
2026-03-09 00:54:13.045680 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045688 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.045696 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.045704 | orchestrator |
2026-03-09 00:54:13.045711 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-09 00:54:13.045719 | orchestrator | Monday 09 March 2026 00:51:22 +0000 (0:00:00.808) 0:02:41.943 **********
2026-03-09 00:54:13.045727 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.045735 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.045743 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045750 | orchestrator |
2026-03-09 00:54:13.045758 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-09 00:54:13.045766 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:00.685) 0:02:42.628 **********
2026-03-09 00:54:13.045774 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045782 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045791 | orchestrator | changed: [testbed-node-2]
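The node-token sequence above (register the file's access mode, loosen it so the token can be read, then restore it) can be sketched in shell. This is a hedged illustration, not the k3s_server role's actual tasks; the file name `node-token` stands in for `/var/lib/rancher/k3s/server/node-token`, and the token value is invented.

```shell
set -eu

TOKEN_FILE="node-token"                  # stand-in for /var/lib/rancher/k3s/server/node-token
printf 'K10abc::server:secret\n' > "$TOKEN_FILE"
chmod 0600 "$TOKEN_FILE"                 # k3s keeps the token root-readable

orig_mode=$(stat -c '%a' "$TOKEN_FILE")  # "Register node-token file access mode"
chmod 0644 "$TOKEN_FILE"                 # "Change file access node-token" so it can be slurped
token=$(cat "$TOKEN_FILE")               # "Read node-token from master"
chmod "$orig_mode" "$TOKEN_FILE"         # "Restore node-token file access"

echo "token=$token"
```

Restoring the recorded mode (rather than hard-coding 0600) keeps the sketch correct even if the distribution ships the token with a different default mode.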
2026-03-09 00:54:13.045799 | orchestrator |
2026-03-09 00:54:13.045806 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-09 00:54:13.045814 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:01.123) 0:02:43.752 **********
2026-03-09 00:54:13.045822 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.045830 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.045838 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045846 | orchestrator |
2026-03-09 00:54:13.045853 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-09 00:54:13.045869 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.912) 0:02:44.665 **********
2026-03-09 00:54:13.045877 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.045885 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.045893 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.045901 | orchestrator |
2026-03-09 00:54:13.045908 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-09 00:54:13.045916 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.381) 0:02:45.047 **********
2026-03-09 00:54:13.045924 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045932 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045940 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.045948 | orchestrator |
2026-03-09 00:54:13.045956 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-09 00:54:13.045964 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.686) 0:02:45.733 **********
2026-03-09 00:54:13.045972 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:54:13.045980 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:54:13.045987 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:54:13.045995 | orchestrator |
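The "Create directory .kube" and subsequent kubeconfig copy steps amount to staging the server-generated config into the user's home with restrictive permissions. A minimal sketch, assuming a local `demo-home` directory and a `k3s.yaml` stand-in for `/etc/rancher/k3s/k3s.yaml` (the real role uses Ansible `file`/`copy` modules, not shell):

```shell
set -eu

USER_HOME="./demo-home"                              # hypothetical user home for the demo
mkdir -p "$USER_HOME/.kube"                          # "Create directory .kube"
printf 'apiVersion: v1\nkind: Config\n' > k3s.yaml   # stand-in for /etc/rancher/k3s/k3s.yaml
install -m 0600 k3s.yaml "$USER_HOME/.kube/config"   # copy with owner-only permissions
```

`install -m 0600` combines the copy and the chmod in one step, which mirrors why the role copies rather than symlinks: the original stays root-owned while the user gets a private copy.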
2026-03-09 00:54:13.046008 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-09 00:54:13.046049 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:00.672) 0:02:46.406 ********** 2026-03-09 00:54:13.046057 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:13.046065 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:13.046073 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:13.046108 | orchestrator | 2026-03-09 00:54:13.046117 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-09 00:54:13.046125 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:01.139) 0:02:47.546 ********** 2026-03-09 00:54:13.046133 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:13.046141 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:13.046149 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:13.046157 | orchestrator | 2026-03-09 00:54:13.046165 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-09 00:54:13.046173 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.803) 0:02:48.349 ********** 2026-03-09 00:54:13.046181 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.046189 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:13.046197 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:13.046205 | orchestrator | 2026-03-09 00:54:13.046213 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-09 00:54:13.046221 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.302) 0:02:48.652 ********** 2026-03-09 00:54:13.046229 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.046237 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:13.046245 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:13.046253 | orchestrator | 
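[editor's note] The "Configure kubectl cluster to https://192.168.16.8:6443" task repoints the kubeconfig that k3s writes (which defaults to a loopback server address) at the cluster VIP. A minimal sketch of that substitution, assuming a plain-text rewrite; the sample kubeconfig and function name are illustrative:

```python
def repoint_kubeconfig(text: str, new_server: str) -> str:
    """Replace the server URL in a kubeconfig's cluster entry,
    preserving indentation."""
    out = []
    for line in text.splitlines():
        if line.strip().startswith("server:"):
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}server: {new_server}")
        else:
            out.append(line)
    return "\n".join(out) + "\n"

# k3s writes server: https://127.0.0.1:6443 by default;
# the role points it at the VIP instead.
kubeconfig = """\
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""
print(repoint_kubeconfig(kubeconfig, "https://192.168.16.8:6443"))
```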
2026-03-09 00:54:13.046261 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-09 00:54:13.046269 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.376) 0:02:49.028 **********
2026-03-09 00:54:13.046277 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.046285 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.046293 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.046301 | orchestrator |
2026-03-09 00:54:13.046309 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-09 00:54:13.046317 | orchestrator | Monday 09 March 2026 00:51:30 +0000 (0:00:00.933) 0:02:49.961 **********
2026-03-09 00:54:13.046325 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:13.046341 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:13.046349 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:13.046357 | orchestrator |
2026-03-09 00:54:13.046366 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-09 00:54:13.046380 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:00.724) 0:02:50.685 **********
2026-03-09 00:54:13.046389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:54:13.046397 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:54:13.046405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:54:13.046413 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:54:13.046421 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:54:13.046429 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:54:13.046437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:54:13.046445 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:54:13.046453 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-09 00:54:13.046461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:54:13.046469 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:54:13.046477 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-09 00:54:13.046485 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:54:13.046493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:54:13.046501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:54:13.046509 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:54:13.046517 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:54:13.046525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:54:13.046533 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:54:13.046541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:54:13.046549 | orchestrator |
2026-03-09 00:54:13.046558 | orchestrator |
PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-09 00:54:13.046565 | orchestrator | 2026-03-09 00:54:13.046573 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-09 00:54:13.046581 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:02.802) 0:02:53.488 ********** 2026-03-09 00:54:13.046589 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:54:13.046601 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:54:13.046610 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:54:13.046618 | orchestrator | 2026-03-09 00:54:13.046626 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-09 00:54:13.046634 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:00.647) 0:02:54.135 ********** 2026-03-09 00:54:13.046642 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:54:13.046650 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:54:13.046658 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:54:13.046666 | orchestrator | 2026-03-09 00:54:13.046674 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-09 00:54:13.046682 | orchestrator | Monday 09 March 2026 00:51:35 +0000 (0:00:00.634) 0:02:54.770 ********** 2026-03-09 00:54:13.046690 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:54:13.046698 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:54:13.046706 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:54:13.046723 | orchestrator | 2026-03-09 00:54:13.046731 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-09 00:54:13.046739 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.428) 0:02:55.198 ********** 2026-03-09 00:54:13.046747 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:54:13.046755 | 
orchestrator | 2026-03-09 00:54:13.046764 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-09 00:54:13.046772 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.782) 0:02:55.980 ********** 2026-03-09 00:54:13.046780 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:54:13.046788 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:54:13.046796 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:54:13.046804 | orchestrator | 2026-03-09 00:54:13.046812 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-09 00:54:13.046820 | orchestrator | Monday 09 March 2026 00:51:37 +0000 (0:00:00.334) 0:02:56.314 ********** 2026-03-09 00:54:13.046828 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:54:13.046836 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:54:13.046844 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:54:13.046852 | orchestrator | 2026-03-09 00:54:13.046860 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-09 00:54:13.046881 | orchestrator | Monday 09 March 2026 00:51:37 +0000 (0:00:00.359) 0:02:56.673 ********** 2026-03-09 00:54:13.046895 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:54:13.046908 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:54:13.046928 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:54:13.046940 | orchestrator | 2026-03-09 00:54:13.046953 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-09 00:54:13.046966 | orchestrator | Monday 09 March 2026 00:51:37 +0000 (0:00:00.357) 0:02:57.031 ********** 2026-03-09 00:54:13.046979 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:54:13.046991 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:54:13.047004 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:54:13.047017 | 
orchestrator | 2026-03-09 00:54:13.047030 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-09 00:54:13.047045 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:01.029) 0:02:58.060 ********** 2026-03-09 00:54:13.047057 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:54:13.047071 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:54:13.047110 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:54:13.047119 | orchestrator | 2026-03-09 00:54:13.047128 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-09 00:54:13.047136 | orchestrator | Monday 09 March 2026 00:51:40 +0000 (0:00:01.206) 0:02:59.266 ********** 2026-03-09 00:54:13.047144 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:54:13.047152 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:54:13.047160 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:54:13.047168 | orchestrator | 2026-03-09 00:54:13.047176 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-09 00:54:13.047184 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:01.448) 0:03:00.714 ********** 2026-03-09 00:54:13.047191 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:54:13.047199 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:54:13.047207 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:54:13.047215 | orchestrator | 2026-03-09 00:54:13.047223 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-09 00:54:13.047231 | orchestrator | 2026-03-09 00:54:13.047239 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-09 00:54:13.047247 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:12.200) 0:03:12.915 ********** 2026-03-09 00:54:13.047255 | orchestrator | ok: [testbed-manager] 2026-03-09 
00:54:13.047262 | orchestrator | 2026-03-09 00:54:13.047272 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-09 00:54:13.047296 | orchestrator | Monday 09 March 2026 00:51:54 +0000 (0:00:00.923) 0:03:13.839 ********** 2026-03-09 00:54:13.047310 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047323 | orchestrator | 2026-03-09 00:54:13.047336 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-09 00:54:13.047349 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:00.544) 0:03:14.383 ********** 2026-03-09 00:54:13.047361 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-09 00:54:13.047373 | orchestrator | 2026-03-09 00:54:13.047386 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-09 00:54:13.047399 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:00.766) 0:03:15.150 ********** 2026-03-09 00:54:13.047411 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047425 | orchestrator | 2026-03-09 00:54:13.047438 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-09 00:54:13.047453 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:01.144) 0:03:16.295 ********** 2026-03-09 00:54:13.047468 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047482 | orchestrator | 2026-03-09 00:54:13.047496 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-09 00:54:13.047509 | orchestrator | Monday 09 March 2026 00:51:57 +0000 (0:00:00.755) 0:03:17.051 ********** 2026-03-09 00:54:13.047530 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 00:54:13.047545 | orchestrator | 2026-03-09 00:54:13.047559 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-09 
00:54:13.047573 | orchestrator | Monday 09 March 2026 00:52:00 +0000 (0:00:02.418) 0:03:19.469 ********** 2026-03-09 00:54:13.047587 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 00:54:13.047601 | orchestrator | 2026-03-09 00:54:13.047616 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-09 00:54:13.047629 | orchestrator | Monday 09 March 2026 00:52:01 +0000 (0:00:01.206) 0:03:20.676 ********** 2026-03-09 00:54:13.047642 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047651 | orchestrator | 2026-03-09 00:54:13.047658 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-09 00:54:13.047666 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:01.101) 0:03:21.777 ********** 2026-03-09 00:54:13.047674 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047682 | orchestrator | 2026-03-09 00:54:13.047690 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-09 00:54:13.047698 | orchestrator | 2026-03-09 00:54:13.047706 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-09 00:54:13.047714 | orchestrator | Monday 09 March 2026 00:52:03 +0000 (0:00:00.654) 0:03:22.431 ********** 2026-03-09 00:54:13.047722 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.047729 | orchestrator | 2026-03-09 00:54:13.047737 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-09 00:54:13.047745 | orchestrator | Monday 09 March 2026 00:52:03 +0000 (0:00:00.188) 0:03:22.619 ********** 2026-03-09 00:54:13.047753 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:54:13.047761 | orchestrator | 2026-03-09 00:54:13.047769 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-03-09 00:54:13.047777 | orchestrator | Monday 09 March 2026 00:52:03 +0000 (0:00:00.290) 0:03:22.910 ********** 2026-03-09 00:54:13.047788 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.047800 | orchestrator | 2026-03-09 00:54:13.047821 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-09 00:54:13.047835 | orchestrator | Monday 09 March 2026 00:52:04 +0000 (0:00:01.241) 0:03:24.151 ********** 2026-03-09 00:54:13.047858 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.047871 | orchestrator | 2026-03-09 00:54:13.047886 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-09 00:54:13.047909 | orchestrator | Monday 09 March 2026 00:52:07 +0000 (0:00:02.297) 0:03:26.448 ********** 2026-03-09 00:54:13.047919 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047927 | orchestrator | 2026-03-09 00:54:13.047935 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-09 00:54:13.047943 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:00.835) 0:03:27.284 ********** 2026-03-09 00:54:13.047951 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.047959 | orchestrator | 2026-03-09 00:54:13.047967 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-09 00:54:13.047975 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:00.694) 0:03:27.979 ********** 2026-03-09 00:54:13.047983 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.047992 | orchestrator | 2026-03-09 00:54:13.048000 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-09 00:54:13.048008 | orchestrator | Monday 09 March 2026 00:52:18 +0000 (0:00:09.698) 0:03:37.678 ********** 2026-03-09 00:54:13.048016 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:54:13.048024 | orchestrator | 2026-03-09 00:54:13.048032 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-09 00:54:13.048040 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:18.936) 0:03:56.614 ********** 2026-03-09 00:54:13.048048 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.048056 | orchestrator | 2026-03-09 00:54:13.048064 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-09 00:54:13.048072 | orchestrator | 2026-03-09 00:54:13.048135 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-09 00:54:13.048146 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:00.522) 0:03:57.136 ********** 2026-03-09 00:54:13.048155 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:13.048162 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:13.048170 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:13.048178 | orchestrator | 2026-03-09 00:54:13.048187 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-09 00:54:13.048195 | orchestrator | Monday 09 March 2026 00:52:38 +0000 (0:00:00.433) 0:03:57.570 ********** 2026-03-09 00:54:13.048203 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:13.048211 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:13.048219 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048227 | orchestrator | 2026-03-09 00:54:13.048235 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-09 00:54:13.048243 | orchestrator | Monday 09 March 2026 00:52:38 +0000 (0:00:00.528) 0:03:58.099 ********** 2026-03-09 00:54:13.048251 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-09 00:54:13.048259 | orchestrator | 
2026-03-09 00:54:13.048267 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-09 00:54:13.048274 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:00.760) 0:03:58.859 ********** 2026-03-09 00:54:13.048282 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048290 | orchestrator | 2026-03-09 00:54:13.048298 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-09 00:54:13.048306 | orchestrator | Monday 09 March 2026 00:52:40 +0000 (0:00:01.058) 0:03:59.918 ********** 2026-03-09 00:54:13.048314 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048322 | orchestrator | 2026-03-09 00:54:13.048330 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-09 00:54:13.048338 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:00.995) 0:04:00.913 ********** 2026-03-09 00:54:13.048346 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048360 | orchestrator | 2026-03-09 00:54:13.048369 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-09 00:54:13.048377 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:00.139) 0:04:01.053 ********** 2026-03-09 00:54:13.048390 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048398 | orchestrator | 2026-03-09 00:54:13.048406 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-09 00:54:13.048415 | orchestrator | Monday 09 March 2026 00:52:43 +0000 (0:00:01.335) 0:04:02.389 ********** 2026-03-09 00:54:13.048423 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048431 | orchestrator | 2026-03-09 00:54:13.048439 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-09 00:54:13.048447 | orchestrator | Monday 09 March 
2026 00:52:43 +0000 (0:00:00.212) 0:04:02.602 ********** 2026-03-09 00:54:13.048455 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048463 | orchestrator | 2026-03-09 00:54:13.048471 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-09 00:54:13.048479 | orchestrator | Monday 09 March 2026 00:52:43 +0000 (0:00:00.187) 0:04:02.789 ********** 2026-03-09 00:54:13.048487 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048495 | orchestrator | 2026-03-09 00:54:13.048503 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-09 00:54:13.048511 | orchestrator | Monday 09 March 2026 00:52:43 +0000 (0:00:00.117) 0:04:02.907 ********** 2026-03-09 00:54:13.048519 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048527 | orchestrator | 2026-03-09 00:54:13.048535 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-09 00:54:13.048542 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:00.507) 0:04:03.415 ********** 2026-03-09 00:54:13.048551 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048559 | orchestrator | 2026-03-09 00:54:13.048567 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-09 00:54:13.048575 | orchestrator | Monday 09 March 2026 00:52:50 +0000 (0:00:06.532) 0:04:09.948 ********** 2026-03-09 00:54:13.048583 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-09 00:54:13.048597 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
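[editor's note] The "FAILED - RETRYING ... (30 retries left)" line above is Ansible's retries/until loop re-running the wait task until each Cilium resource reports ready. A generic sketch of that poll-until-ready pattern, using a stand-in check function in place of the real readiness probe (e.g. a `kubectl rollout status` call):

```python
import time

def wait_until(check, retries=30, delay=0.0):
    """Re-run `check` until it returns True or retries are exhausted —
    the same retries/until behaviour Ansible applies to the
    'Wait for Cilium resources' task."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt          # number of attempts it took
        time.sleep(delay)           # Ansible's `delay` between retries
    raise TimeoutError(f"check did not succeed within {retries} retries")

# Fake probe that becomes ready on the third attempt (stands in for
# polling a deployment/daemonset until it is rolled out).
state = {"calls": 0}
def fake_rollout_ready():
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_until(fake_rollout_ready, retries=30))  # -> 3
```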
2026-03-09 00:54:13.048606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-09 00:54:13.048619 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-09 00:54:13.048636 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-09 00:54:13.048650 | orchestrator | 2026-03-09 00:54:13.048661 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-09 00:54:13.048672 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:42.614) 0:04:52.562 ********** 2026-03-09 00:54:13.048683 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048694 | orchestrator | 2026-03-09 00:54:13.048705 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-09 00:54:13.048716 | orchestrator | Monday 09 March 2026 00:53:34 +0000 (0:00:01.377) 0:04:53.940 ********** 2026-03-09 00:54:13.048728 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048739 | orchestrator | 2026-03-09 00:54:13.048750 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-09 00:54:13.048760 | orchestrator | Monday 09 March 2026 00:53:36 +0000 (0:00:02.040) 0:04:55.981 ********** 2026-03-09 00:54:13.048770 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:54:13.048781 | orchestrator | 2026-03-09 00:54:13.048791 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-09 00:54:13.048802 | orchestrator | Monday 09 March 2026 00:53:38 +0000 (0:00:01.253) 0:04:57.234 ********** 2026-03-09 00:54:13.048813 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048825 | orchestrator | 2026-03-09 00:54:13.048837 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-09 00:54:13.048849 | orchestrator 
| Monday 09 March 2026 00:53:38 +0000 (0:00:00.146) 0:04:57.381 ********** 2026-03-09 00:54:13.048861 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-09 00:54:13.048883 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-09 00:54:13.048895 | orchestrator | 2026-03-09 00:54:13.048903 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-09 00:54:13.048909 | orchestrator | Monday 09 March 2026 00:53:40 +0000 (0:00:02.363) 0:04:59.745 ********** 2026-03-09 00:54:13.048916 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:13.048923 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:13.048929 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:13.048936 | orchestrator | 2026-03-09 00:54:13.048943 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-09 00:54:13.048949 | orchestrator | Monday 09 March 2026 00:53:41 +0000 (0:00:00.429) 0:05:00.174 ********** 2026-03-09 00:54:13.048956 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:13.048963 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:13.048970 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:13.048976 | orchestrator | 2026-03-09 00:54:13.048983 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-09 00:54:13.048989 | orchestrator | 2026-03-09 00:54:13.048996 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-09 00:54:13.049003 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:01.293) 0:05:01.468 ********** 2026-03-09 00:54:13.049009 | orchestrator | ok: [testbed-manager] 2026-03-09 00:54:13.049016 | orchestrator | 2026-03-09 00:54:13.049022 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-09 00:54:13.049029 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:00.171) 0:05:01.639 ********** 2026-03-09 00:54:13.049036 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:54:13.049042 | orchestrator | 2026-03-09 00:54:13.049054 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-09 00:54:13.049061 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:00.257) 0:05:01.896 ********** 2026-03-09 00:54:13.049067 | orchestrator | changed: [testbed-manager] 2026-03-09 00:54:13.049074 | orchestrator | 2026-03-09 00:54:13.049100 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-09 00:54:13.049108 | orchestrator | 2026-03-09 00:54:13.049115 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-09 00:54:13.049122 | orchestrator | Monday 09 March 2026 00:53:49 +0000 (0:00:07.123) 0:05:09.019 ********** 2026-03-09 00:54:13.049128 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:54:13.049135 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:54:13.049142 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:54:13.049148 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:13.049155 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:13.049161 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:13.049168 | orchestrator | 2026-03-09 00:54:13.049175 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-09 00:54:13.049182 | orchestrator | Monday 09 March 2026 00:53:52 +0000 (0:00:02.248) 0:05:11.268 ********** 2026-03-09 00:54:13.049188 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-09 00:54:13.049195 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true)
2026-03-09 00:54:13.049202 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-09 00:54:13.049208 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-09 00:54:13.049215 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-09 00:54:13.049222 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-09 00:54:13.049228 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-09 00:54:13.049240 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-09 00:54:13.049255 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-09 00:54:13.049262 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-09 00:54:13.049268 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-09 00:54:13.049275 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-09 00:54:13.049282 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-09 00:54:13.049289 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-09 00:54:13.049296 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-09 00:54:13.049302 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-09 00:54:13.049309 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-09 00:54:13.049315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-09 00:54:13.049322 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-09 00:54:13.049329 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-09 00:54:13.049335 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-09 00:54:13.049342 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-09 00:54:13.049348 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-09 00:54:13.049355 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-09 00:54:13.049362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-09 00:54:13.049368 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-09 00:54:13.049377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-09 00:54:13.049389 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-09 00:54:13.049406 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-09 00:54:13.049418 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-09 00:54:13.049429 | orchestrator |
2026-03-09 00:54:13.049439 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-09 00:54:13.049449 | orchestrator | Monday 09 March 2026 00:54:09 +0000 (0:00:17.444) 0:05:28.713 **********
2026-03-09 00:54:13.049459 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.049469 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.049481 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.049493 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.049504 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.049514 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.049526 | orchestrator |
2026-03-09 00:54:13.049538 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-09 00:54:13.049549 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:00.787) 0:05:29.500 **********
2026-03-09 00:54:13.049561 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:54:13.049568 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:54:13.049574 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:54:13.049581 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:13.049587 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:54:13.049594 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:54:13.049601 | orchestrator |
2026-03-09 00:54:13.049614 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:54:13.049621 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:54:13.049631 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-09 00:54:13.049638 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-09 00:54:13.049645 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-09 00:54:13.049651 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:54:13.049721 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:54:13.049738 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:54:13.049745 | orchestrator |
2026-03-09 00:54:13.049752 | orchestrator |
2026-03-09 00:54:13.049759 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:54:13.049773 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:00.503) 0:05:30.003 **********
2026-03-09 00:54:13.049780 | orchestrator | ===============================================================================
2026-03-09 00:54:13.049787 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.86s
2026-03-09 00:54:13.049794 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.61s
2026-03-09 00:54:13.049801 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.19s
2026-03-09 00:54:13.049807 | orchestrator | kubectl : Install required packages ------------------------------------ 18.94s
2026-03-09 00:54:13.049814 | orchestrator | Manage labels ---------------------------------------------------------- 17.44s
2026-03-09 00:54:13.049821 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.20s
2026-03-09 00:54:13.049827 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.70s
2026-03-09 00:54:13.049834 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.12s
2026-03-09 00:54:13.049841 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.08s
2026-03-09 00:54:13.049847 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 7.06s
2026-03-09 00:54:13.049854 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.53s
2026-03-09 00:54:13.049861 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.63s
2026-03-09 00:54:13.049868 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.58s
2026-03-09 00:54:13.049875 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 3.38s
2026-03-09 00:54:13.049881 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 3.32s
2026-03-09 00:54:13.049888 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.10s
2026-03-09 00:54:13.049894 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.94s
2026-03-09 00:54:13.049901 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.84s
2026-03-09 00:54:13.049908 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.84s
2026-03-09 00:54:13.049914 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.80s
2026-03-09 00:54:13.049927 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task 32ef8487-f5e0-459d-a80f-d21d22a0557a is in state STARTED
2026-03-09 00:54:13.049933 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:13.049940 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:13.049947 | orchestrator | 2026-03-09 00:54:13 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:13.049953 | orchestrator | 2026-03-09 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:16.085913 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:16.089642 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state STARTED
2026-03-09 00:54:16.093127 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task 32ef8487-f5e0-459d-a80f-d21d22a0557a is in state STARTED
2026-03-09 00:54:16.095009 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:16.096452 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:16.099477 | orchestrator | 2026-03-09 00:54:16 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:16.100164 | orchestrator | 2026-03-09 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:19.136718 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:19.137394 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state STARTED
2026-03-09 00:54:19.138409 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task 32ef8487-f5e0-459d-a80f-d21d22a0557a is in state STARTED
2026-03-09 00:54:19.140006 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:19.141821 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:19.142553 | orchestrator | 2026-03-09 00:54:19 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:19.142593 | orchestrator | 2026-03-09 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:22.181914 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:22.181987 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state STARTED
2026-03-09 00:54:22.181993 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task 32ef8487-f5e0-459d-a80f-d21d22a0557a is in state SUCCESS
2026-03-09 00:54:22.184647 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:22.188632 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:22.188687 | orchestrator | 2026-03-09 00:54:22 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:22.188696 | orchestrator | 2026-03-09 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:25.221877 | orchestrator | 2026-03-09 00:54:25 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:25.221960 | orchestrator | 2026-03-09 00:54:25 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state STARTED
2026-03-09 00:54:25.222167 | orchestrator | 2026-03-09 00:54:25 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:25.222883 | orchestrator | 2026-03-09 00:54:25 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:25.223794 | orchestrator | 2026-03-09 00:54:25 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:25.223816 | orchestrator | 2026-03-09 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:28.268734 | orchestrator | 2026-03-09 00:54:28 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:28.268797 | orchestrator | 2026-03-09 00:54:28 | INFO  | Task cdb29435-ac14-45e6-84fd-dc9caf95b4b0 is in state SUCCESS
2026-03-09 00:54:28.268809 | orchestrator | 2026-03-09 00:54:28 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:28.271257 | orchestrator | 2026-03-09 00:54:28 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:28.271990 | orchestrator | 2026-03-09 00:54:28 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:28.272030 | orchestrator | 2026-03-09 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:31.316052 | orchestrator | 2026-03-09 00:54:31 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:31.317087 | orchestrator | 2026-03-09 00:54:31 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:31.318307 | orchestrator | 2026-03-09 00:54:31 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:31.319113 | orchestrator | 2026-03-09 00:54:31 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:31.319458 | orchestrator | 2026-03-09 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:34.353538 | orchestrator | 2026-03-09 00:54:34 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:34.353964 | orchestrator | 2026-03-09 00:54:34 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:34.354562 | orchestrator | 2026-03-09 00:54:34 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:34.356349 | orchestrator | 2026-03-09 00:54:34 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:34.356415 | orchestrator | 2026-03-09 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:37.401375 | orchestrator | 2026-03-09 00:54:37 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:37.401471 | orchestrator | 2026-03-09 00:54:37 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:37.402112 | orchestrator | 2026-03-09 00:54:37 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:37.402892 | orchestrator | 2026-03-09 00:54:37 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:37.402936 | orchestrator | 2026-03-09 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:40.462005 | orchestrator | 2026-03-09 00:54:40 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:40.467954 | orchestrator | 2026-03-09 00:54:40 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:40.471192 | orchestrator | 2026-03-09 00:54:40 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:40.474507 | orchestrator | 2026-03-09 00:54:40 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state STARTED
2026-03-09 00:54:40.474599 | orchestrator | 2026-03-09 00:54:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:54:43.513157 | orchestrator | 2026-03-09 00:54:43 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:54:43.513669 | orchestrator | 2026-03-09 00:54:43 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:54:43.514483 | orchestrator | 2026-03-09 00:54:43 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED
2026-03-09 00:54:43.516080 | orchestrator | 2026-03-09 00:54:43 | INFO  | Task 1d5e9a70-321e-4b1b-9e07-d4f2ab7b92d2 is in state SUCCESS
2026-03-09 00:54:43.517264 | orchestrator |
2026-03-09 00:54:43.517307 | orchestrator |
2026-03-09 00:54:43.517320 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-09 00:54:43.517333 | orchestrator |
2026-03-09 00:54:43.517344 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-09 00:54:43.517356 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:00.181) 0:00:00.181 **********
2026-03-09 00:54:43.517368 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-09 00:54:43.517379 | orchestrator |
2026-03-09 00:54:43.517390 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-09 00:54:43.517402 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:00.905) 0:00:01.087 **********
2026-03-09 00:54:43.517415 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:43.517427 | orchestrator |
2026-03-09 00:54:43.517438 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-09 00:54:43.517449 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:02.266) 0:00:03.353 **********
2026-03-09 00:54:43.517461 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:43.517472 | orchestrator |
2026-03-09 00:54:43.517483 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:54:43.517495 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:54:43.517508 | orchestrator |
2026-03-09 00:54:43.517519 | orchestrator |
2026-03-09 00:54:43.517530 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:54:43.517541 | orchestrator | Monday 09 March 2026 00:54:20 +0000 (0:00:00.656) 0:00:04.010 **********
2026-03-09 00:54:43.517552 | orchestrator | ===============================================================================
2026-03-09 00:54:43.517563 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.27s
2026-03-09 00:54:43.517574 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.91s
2026-03-09 00:54:43.517621 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.66s
2026-03-09 00:54:43.517633 | orchestrator |
2026-03-09 00:54:43.517644 | orchestrator |
2026-03-09 00:54:43.517655 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-09 00:54:43.517667 | orchestrator |
2026-03-09 00:54:43.517678 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-09 00:54:43.517689 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:00.193) 0:00:00.193 **********
2026-03-09 00:54:43.517700 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:43.517712 | orchestrator |
2026-03-09 00:54:43.517741 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-09 00:54:43.517752 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:00.661) 0:00:00.854 **********
2026-03-09 00:54:43.517763 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:43.517775 | orchestrator |
2026-03-09 00:54:43.517786 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-09 00:54:43.517797 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:00.684) 0:00:01.539 **********
2026-03-09 00:54:43.517808 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-09 00:54:43.517846 | orchestrator |
2026-03-09 00:54:43.517858 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-09 00:54:43.517869 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:00.802) 0:00:02.341 **********
2026-03-09 00:54:43.517880 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:43.517891 | orchestrator |
2026-03-09 00:54:43.517902 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-09 00:54:43.517913 | orchestrator | Monday 09 March 2026 00:54:21 +0000 (0:00:02.915) 0:00:05.257 **********
2026-03-09 00:54:43.517924 | orchestrator | changed: [testbed-manager]
2026-03-09 00:54:43.517935 | orchestrator |
2026-03-09 00:54:43.517946 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-09 00:54:43.517957 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:00.779) 0:00:06.036 **********
2026-03-09 00:54:43.517967 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 00:54:43.517979 | orchestrator |
2026-03-09 00:54:43.517990 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-09 00:54:43.518000 | orchestrator | Monday 09 March 2026 00:54:24 +0000 (0:00:01.956) 0:00:07.992 **********
2026-03-09 00:54:43.518011 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 00:54:43.518189 | orchestrator |
2026-03-09 00:54:43.518202 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-09 00:54:43.518213 | orchestrator | Monday 09 March 2026 00:54:25 +0000 (0:00:01.110) 0:00:09.102 **********
2026-03-09 00:54:43.518224 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:43.518235 | orchestrator |
2026-03-09 00:54:43.518247 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-09 00:54:43.518258 | orchestrator | Monday 09 March 2026 00:54:25 +0000 (0:00:00.523) 0:00:09.625 **********
2026-03-09 00:54:43.518268 | orchestrator | ok: [testbed-manager]
2026-03-09 00:54:43.518279 | orchestrator |
2026-03-09 00:54:43.518290 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:54:43.518302 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:54:43.518313 | orchestrator |
2026-03-09 00:54:43.518323 | orchestrator |
2026-03-09 00:54:43.518334 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:54:43.518345 | orchestrator | Monday 09 March 2026 00:54:26 +0000 (0:00:00.434) 0:00:10.060 **********
2026-03-09 00:54:43.518356 | orchestrator | ===============================================================================
2026-03-09 00:54:43.518367 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.92s
2026-03-09 00:54:43.518378 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.96s
2026-03-09 00:54:43.518389 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.11s
2026-03-09 00:54:43.518416 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-03-09 00:54:43.518428 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.78s
2026-03-09 00:54:43.518439 | orchestrator | Create .kube directory -------------------------------------------------- 0.68s
2026-03-09 00:54:43.518450 | orchestrator | Get home directory of operator user ------------------------------------- 0.66s
2026-03-09 00:54:43.518460 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.52s
2026-03-09 00:54:43.518470 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.43s
2026-03-09 00:54:43.518480 | orchestrator |
2026-03-09 00:54:43.518489 | orchestrator |
2026-03-09 00:54:43.518499 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-09 00:54:43.518509 | orchestrator |
2026-03-09 00:54:43.518519 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-09 00:54:43.518528 | orchestrator | Monday 09 March 2026 00:52:07 +0000 (0:00:00.134) 0:00:00.134 **********
2026-03-09 00:54:43.518538 | orchestrator | ok: [localhost] => {
2026-03-09 00:54:43.518559 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-09 00:54:43.518569 | orchestrator | }
2026-03-09 00:54:43.518579 | orchestrator |
2026-03-09 00:54:43.518589 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-09 00:54:43.518598 | orchestrator | Monday 09 March 2026 00:52:07 +0000 (0:00:00.063) 0:00:00.197 **********
2026-03-09 00:54:43.518609 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-09 00:54:43.518621 | orchestrator | ...ignoring
2026-03-09 00:54:43.518631 | orchestrator |
2026-03-09 00:54:43.518641 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-09 00:54:43.518651 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:03.312) 0:00:03.510 **********
2026-03-09 00:54:43.518661 | orchestrator | skipping: [localhost]
2026-03-09 00:54:43.518671 | orchestrator |
2026-03-09 00:54:43.518680 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-09 00:54:43.518690 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:00.116) 0:00:03.626 **********
2026-03-09 00:54:43.518700 | orchestrator | ok: [localhost]
2026-03-09 00:54:43.518710 | orchestrator |
2026-03-09 00:54:43.518720 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:54:43.518730 | orchestrator |
2026-03-09 00:54:43.518745 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:54:43.518755 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:00.528) 0:00:04.155 **********
2026-03-09 00:54:43.518765 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:43.518775 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:54:43.518785 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:54:43.518795 | orchestrator |
2026-03-09 00:54:43.518804 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:54:43.518814 | orchestrator | Monday 09 March 2026 00:52:12 +0000 (0:00:01.100) 0:00:05.256 **********
2026-03-09 00:54:43.518824 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-09 00:54:43.518834 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-09 00:54:43.518844 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-09 00:54:43.518853 | orchestrator |
2026-03-09 00:54:43.518863 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-09 00:54:43.518873 | orchestrator |
2026-03-09 00:54:43.518883 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-09 00:54:43.518892 | orchestrator | Monday 09 March 2026 00:52:14 +0000 (0:00:00.697) 0:00:06.415 **********
2026-03-09 00:54:43.518902 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:43.518912 | orchestrator |
2026-03-09 00:54:43.518922 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-09 00:54:43.518932 | orchestrator | Monday 09 March 2026 00:52:14 +0000 (0:00:01.030) 0:00:07.112 **********
2026-03-09 00:54:43.518941 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:43.518951 | orchestrator |
2026-03-09 00:54:43.518961 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-09 00:54:43.518971 | orchestrator | Monday 09 March 2026 00:52:15 +0000 (0:00:01.030) 0:00:08.142 **********
2026-03-09 00:54:43.518980 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.518990 | orchestrator |
2026-03-09 00:54:43.519000 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-09 00:54:43.519009 | orchestrator | Monday 09 March 2026 00:52:16 +0000 (0:00:00.344) 0:00:08.487 **********
2026-03-09 00:54:43.519019 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.519028 | orchestrator |
2026-03-09 00:54:43.519038 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-09 00:54:43.519065 | orchestrator | Monday 09 March 2026 00:52:16 +0000 (0:00:00.413) 0:00:08.900 **********
2026-03-09 00:54:43.519082 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.519092 | orchestrator |
2026-03-09 00:54:43.519102 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-09 00:54:43.519112 | orchestrator | Monday 09 March 2026 00:52:17 +0000 (0:00:00.436) 0:00:09.337 **********
2026-03-09 00:54:43.519121 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.519131 | orchestrator |
2026-03-09 00:54:43.519141 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-09 00:54:43.519150 | orchestrator | Monday 09 March 2026 00:52:18 +0000 (0:00:00.965) 0:00:10.303 **********
2026-03-09 00:54:43.519160 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:54:43.519170 | orchestrator |
2026-03-09 00:54:43.519180 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-09 00:54:43.519196 | orchestrator | Monday 09 March 2026 00:52:18 +0000 (0:00:00.836) 0:00:11.140 **********
2026-03-09 00:54:43.519206 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:54:43.519216 | orchestrator |
2026-03-09 00:54:43.519226 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-09 00:54:43.519236 | orchestrator | Monday 09 March 2026 00:52:19 +0000 (0:00:00.841) 0:00:11.981 **********
2026-03-09 00:54:43.519245 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.519255 | orchestrator |
2026-03-09 00:54:43.519265 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-09 00:54:43.519275 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:00.418) 0:00:12.399 **********
2026-03-09 00:54:43.519284 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:54:43.519294 | orchestrator |
2026-03-09 00:54:43.519304 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-09 00:54:43.519314 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:00.484) 0:00:12.884 **********
2026-03-09 00:54:43.519329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519379 | orchestrator |
2026-03-09 00:54:43.519389 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-09 00:54:43.519399 | orchestrator | Monday 09 March 2026 00:52:22 +0000 (0:00:01.698) 0:00:14.582 **********
2026-03-09 00:54:43.519417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-09 00:54:43.519462 | orchestrator |
2026-03-09 00:54:43.519472 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-09 00:54:43.519482 | orchestrator | Monday 09 March 2026 00:52:25 +0000 (0:00:03.647) 0:00:18.230 **********
2026-03-09 00:54:43.519492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:43.519501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:43.519511 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-09 00:54:43.519521 | orchestrator |
2026-03-09 00:54:43.519531 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-09 00:54:43.519541 | orchestrator | Monday 09 March 2026 00:52:28 +0000 (0:00:02.599) 0:00:20.830 **********
2026-03-09 00:54:43.519550 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:43.519560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:43.519570 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-09 00:54:43.519579 | orchestrator |
2026-03-09 00:54:43.519594 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-09 00:54:43.519604 | orchestrator | Monday 09 March 2026 00:52:30 +0000 (0:00:02.375) 0:00:23.205 **********
2026-03-09 00:54:43.519614 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:43.519623 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:43.519633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-09 00:54:43.519643 | orchestrator |
2026-03-09 00:54:43.519652 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-09 00:54:43.519662 | orchestrator | Monday 09 March 2026 00:52:33 +0000 (0:00:02.439) 0:00:25.645 **********
2026-03-09 00:54:43.519672 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:54:43.519681 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:54:43.519691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:54:43.519701 | orchestrator | 2026-03-09 00:54:43.519710 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-09 00:54:43.519720 | orchestrator | Monday 09 March 2026 00:52:38 +0000 (0:00:05.187) 0:00:30.832 ********** 2026-03-09 00:54:43.519730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:54:43.519739 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:54:43.519749 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:54:43.519758 | orchestrator | 2026-03-09 00:54:43.519768 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-09 00:54:43.519778 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:02.753) 0:00:33.585 ********** 2026-03-09 00:54:43.519794 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 00:54:43.519804 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 00:54:43.519818 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 00:54:43.519828 | orchestrator | 2026-03-09 00:54:43.519838 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-09 00:54:43.519848 | orchestrator | Monday 09 
March 2026 00:52:43 +0000 (0:00:02.590) 0:00:36.175 ********** 2026-03-09 00:54:43.519858 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:43.519867 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:43.519877 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:43.519887 | orchestrator | 2026-03-09 00:54:43.519897 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-09 00:54:43.519906 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:00.663) 0:00:36.839 ********** 2026-03-09 00:54:43.519917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:54:43.519936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:54:43.519948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:54:43.519964 | orchestrator | 2026-03-09 00:54:43.519974 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-03-09 00:54:43.519984 | orchestrator | Monday 09 March 2026 00:52:47 +0000 (0:00:02.891) 0:00:39.730 ********** 2026-03-09 00:54:43.519994 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:43.520003 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:43.520013 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:43.520023 | orchestrator | 2026-03-09 00:54:43.520033 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-09 00:54:43.520065 | orchestrator | Monday 09 March 2026 00:52:48 +0000 (0:00:01.113) 0:00:40.844 ********** 2026-03-09 00:54:43.520076 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:43.520086 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:43.520096 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:43.520106 | orchestrator | 2026-03-09 00:54:43.520116 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-09 00:54:43.520125 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:07.594) 0:00:48.439 ********** 2026-03-09 00:54:43.520135 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:43.520145 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:43.520155 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:43.520164 | orchestrator | 2026-03-09 00:54:43.520174 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:43.520184 | orchestrator | 2026-03-09 00:54:43.520194 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:43.520203 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.324) 0:00:48.764 ********** 2026-03-09 00:54:43.520213 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:43.520223 | orchestrator | 2026-03-09 00:54:43.520232 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:43.520242 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.702) 0:00:49.466 ********** 2026-03-09 00:54:43.520252 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:54:43.520262 | orchestrator | 2026-03-09 00:54:43.520271 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:54:43.520281 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.286) 0:00:49.753 ********** 2026-03-09 00:54:43.520291 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:43.520300 | orchestrator | 2026-03-09 00:54:43.520310 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:54:43.520320 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:01.893) 0:00:51.646 ********** 2026-03-09 00:54:43.520329 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:54:43.520339 | orchestrator | 2026-03-09 00:54:43.520349 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:43.520359 | orchestrator | 2026-03-09 00:54:43.520368 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:43.520378 | orchestrator | Monday 09 March 2026 00:53:57 +0000 (0:00:57.679) 0:01:49.325 ********** 2026-03-09 00:54:43.520388 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:43.520398 | orchestrator | 2026-03-09 00:54:43.520407 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:43.520417 | orchestrator | Monday 09 March 2026 00:53:57 +0000 (0:00:00.834) 0:01:50.160 ********** 2026-03-09 00:54:43.520427 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:54:43.520437 | orchestrator | 2026-03-09 00:54:43.520446 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-03-09 00:54:43.520456 | orchestrator | Monday 09 March 2026 00:53:58 +0000 (0:00:00.615) 0:01:50.775 ********** 2026-03-09 00:54:43.520472 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:43.520482 | orchestrator | 2026-03-09 00:54:43.520492 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:54:43.520502 | orchestrator | Monday 09 March 2026 00:54:00 +0000 (0:00:02.230) 0:01:53.006 ********** 2026-03-09 00:54:43.520512 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:54:43.520521 | orchestrator | 2026-03-09 00:54:43.520531 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:54:43.520541 | orchestrator | 2026-03-09 00:54:43.520551 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:54:43.520566 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:17.794) 0:02:10.801 ********** 2026-03-09 00:54:43.520576 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:43.520586 | orchestrator | 2026-03-09 00:54:43.520596 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:54:43.520606 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:00.676) 0:02:11.477 ********** 2026-03-09 00:54:43.520615 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:54:43.520625 | orchestrator | 2026-03-09 00:54:43.520635 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:54:43.520644 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:00.348) 0:02:11.826 ********** 2026-03-09 00:54:43.520654 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:43.520664 | orchestrator | 2026-03-09 00:54:43.520674 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 
00:54:43.520684 | orchestrator | Monday 09 March 2026 00:54:21 +0000 (0:00:02.318) 0:02:14.145 ********** 2026-03-09 00:54:43.520693 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:54:43.520703 | orchestrator | 2026-03-09 00:54:43.520713 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-09 00:54:43.520722 | orchestrator | 2026-03-09 00:54:43.520732 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-09 00:54:43.520742 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:17.183) 0:02:31.328 ********** 2026-03-09 00:54:43.520752 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:54:43.520761 | orchestrator | 2026-03-09 00:54:43.520771 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-09 00:54:43.520781 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:00.725) 0:02:32.054 ********** 2026-03-09 00:54:43.520790 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-09 00:54:43.520800 | orchestrator | enable_outward_rabbitmq_True 2026-03-09 00:54:43.520810 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-09 00:54:43.520819 | orchestrator | outward_rabbitmq_restart 2026-03-09 00:54:43.520829 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:54:43.520839 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:54:43.520848 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:54:43.520858 | orchestrator | 2026-03-09 00:54:43.520867 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-09 00:54:43.520877 | orchestrator | skipping: no hosts matched 2026-03-09 00:54:43.520887 | orchestrator | 2026-03-09 00:54:43.520896 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-09 
00:54:43.520916 | orchestrator | skipping: no hosts matched 2026-03-09 00:54:43.520926 | orchestrator | 2026-03-09 00:54:43.520936 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-09 00:54:43.520945 | orchestrator | skipping: no hosts matched 2026-03-09 00:54:43.520955 | orchestrator | 2026-03-09 00:54:43.520965 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:54:43.520975 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-09 00:54:43.520985 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 00:54:43.521002 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:54:43.521011 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:54:43.521021 | orchestrator | 2026-03-09 00:54:43.521031 | orchestrator | 2026-03-09 00:54:43.521040 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:54:43.521075 | orchestrator | Monday 09 March 2026 00:54:42 +0000 (0:00:02.764) 0:02:34.819 ********** 2026-03-09 00:54:43.521085 | orchestrator | =============================================================================== 2026-03-09 00:54:43.521095 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 92.65s 2026-03-09 00:54:43.521104 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.59s 2026-03-09 00:54:43.521114 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.45s 2026-03-09 00:54:43.521124 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 5.19s 2026-03-09 00:54:43.521134 | orchestrator | 
rabbitmq : Copying over config.json files for services ------------------ 3.65s 2026-03-09 00:54:43.521143 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.31s 2026-03-09 00:54:43.521153 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.89s 2026-03-09 00:54:43.521163 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.76s 2026-03-09 00:54:43.521172 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.75s 2026-03-09 00:54:43.521182 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.60s 2026-03-09 00:54:43.521192 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.59s 2026-03-09 00:54:43.521202 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.44s 2026-03-09 00:54:43.521211 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.38s 2026-03-09 00:54:43.521221 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.21s 2026-03-09 00:54:43.521231 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.70s 2026-03-09 00:54:43.521245 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.25s 2026-03-09 00:54:43.521255 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s 2026-03-09 00:54:43.521265 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.11s 2026-03-09 00:54:43.521275 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2026-03-09 00:54:43.521284 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2026-03-09 00:54:43.521294 | orchestrator | 2026-03-09 
00:54:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:46.569549 | orchestrator | 2026-03-09 00:54:46 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:54:46.571015 | orchestrator | 2026-03-09 00:54:46 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:54:46.573305 | orchestrator | 2026-03-09 00:54:46 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:54:46.573362 | orchestrator | 2026-03-09 00:54:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:49.612404 | orchestrator | 2026-03-09 00:54:49 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:54:49.614878 | orchestrator | 2026-03-09 00:54:49 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:54:49.618311 | orchestrator | 2026-03-09 00:54:49 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:54:49.618756 | orchestrator | 2026-03-09 00:54:49 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:52.666444 | orchestrator | 2026-03-09 00:54:52 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:54:52.667909 | orchestrator | 2026-03-09 00:54:52 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:54:52.669531 | orchestrator | 2026-03-09 00:54:52 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:54:52.669579 | orchestrator | 2026-03-09 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:55.714765 | orchestrator | 2026-03-09 00:54:55 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:54:55.716429 | orchestrator | 2026-03-09 00:54:55 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:54:55.719219 | orchestrator | 2026-03-09 00:54:55 | INFO  | Task 
220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:54:55.719262 | orchestrator | 2026-03-09 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:54:58.763982 | orchestrator | 2026-03-09 00:54:58 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:54:58.765864 | orchestrator | 2026-03-09 00:54:58 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:54:58.768500 | orchestrator | 2026-03-09 00:54:58 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:54:58.768560 | orchestrator | 2026-03-09 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:01.822129 | orchestrator | 2026-03-09 00:55:01 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:01.823573 | orchestrator | 2026-03-09 00:55:01 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:01.825905 | orchestrator | 2026-03-09 00:55:01 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:01.825953 | orchestrator | 2026-03-09 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:04.866575 | orchestrator | 2026-03-09 00:55:04 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:04.866670 | orchestrator | 2026-03-09 00:55:04 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:04.866968 | orchestrator | 2026-03-09 00:55:04 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:04.866995 | orchestrator | 2026-03-09 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:07.906516 | orchestrator | 2026-03-09 00:55:07 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:07.907216 | orchestrator | 2026-03-09 00:55:07 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state 
STARTED 2026-03-09 00:55:07.908508 | orchestrator | 2026-03-09 00:55:07 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:07.908547 | orchestrator | 2026-03-09 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:10.999424 | orchestrator | 2026-03-09 00:55:10 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:11.001071 | orchestrator | 2026-03-09 00:55:10 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:11.003299 | orchestrator | 2026-03-09 00:55:11 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:11.003367 | orchestrator | 2026-03-09 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:14.051497 | orchestrator | 2026-03-09 00:55:14 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:14.053240 | orchestrator | 2026-03-09 00:55:14 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:14.054521 | orchestrator | 2026-03-09 00:55:14 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:14.054566 | orchestrator | 2026-03-09 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:17.095772 | orchestrator | 2026-03-09 00:55:17 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:17.097089 | orchestrator | 2026-03-09 00:55:17 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:17.099155 | orchestrator | 2026-03-09 00:55:17 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:17.099188 | orchestrator | 2026-03-09 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:20.141397 | orchestrator | 2026-03-09 00:55:20 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:20.142449 | orchestrator | 
2026-03-09 00:55:20 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:20.142679 | orchestrator | 2026-03-09 00:55:20 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:20.142728 | orchestrator | 2026-03-09 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:23.189147 | orchestrator | 2026-03-09 00:55:23 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:23.190160 | orchestrator | 2026-03-09 00:55:23 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:23.191202 | orchestrator | 2026-03-09 00:55:23 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:23.191226 | orchestrator | 2026-03-09 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:26.240579 | orchestrator | 2026-03-09 00:55:26 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:26.241411 | orchestrator | 2026-03-09 00:55:26 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:26.243231 | orchestrator | 2026-03-09 00:55:26 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:26.243280 | orchestrator | 2026-03-09 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:29.293877 | orchestrator | 2026-03-09 00:55:29 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:29.293986 | orchestrator | 2026-03-09 00:55:29 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:29.296595 | orchestrator | 2026-03-09 00:55:29 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:29.296663 | orchestrator | 2026-03-09 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:32.335549 | orchestrator | 2026-03-09 00:55:32 | INFO  | Task 
e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:32.337142 | orchestrator | 2026-03-09 00:55:32 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:32.338595 | orchestrator | 2026-03-09 00:55:32 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:32.338663 | orchestrator | 2026-03-09 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:35.377555 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:35.381101 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:35.382992 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:35.383071 | orchestrator | 2026-03-09 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:38.433703 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:38.435452 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:38.437390 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:38.437446 | orchestrator | 2026-03-09 00:55:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:41.476525 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:41.476842 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:41.477929 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state STARTED 2026-03-09 00:55:41.478091 | orchestrator | 2026-03-09 00:55:41 | INFO  | Wait 1 second(s) until the next 
check 2026-03-09 00:55:44.516623 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:55:44.519757 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:55:44.525797 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task 220e9859-b094-4c6d-aa2a-3ee4f04bd493 is in state SUCCESS 2026-03-09 00:55:44.526211 | orchestrator | 2026-03-09 00:55:44.528616 | orchestrator | 2026-03-09 00:55:44.528736 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:55:44.528765 | orchestrator | 2026-03-09 00:55:44.528825 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:55:44.528843 | orchestrator | Monday 09 March 2026 00:53:06 +0000 (0:00:00.176) 0:00:00.176 ********** 2026-03-09 00:55:44.528859 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:55:44.528874 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:55:44.528888 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:55:44.528903 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:44.528918 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:44.528935 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:44.528949 | orchestrator | 2026-03-09 00:55:44.528984 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:55:44.529042 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:00.782) 0:00:00.959 ********** 2026-03-09 00:55:44.529060 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-09 00:55:44.529076 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-09 00:55:44.529093 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-09 00:55:44.529110 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-09 00:55:44.529127 | orchestrator | 
ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-09 00:55:44.529137 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-09 00:55:44.529149 | orchestrator | 2026-03-09 00:55:44.529163 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-09 00:55:44.529180 | orchestrator | 2026-03-09 00:55:44.529197 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-09 00:55:44.529213 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:00.899) 0:00:01.858 ********** 2026-03-09 00:55:44.529264 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:44.529285 | orchestrator | 2026-03-09 00:55:44.529303 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-09 00:55:44.529320 | orchestrator | Monday 09 March 2026 00:53:09 +0000 (0:00:01.485) 0:00:03.343 ********** 2026-03-09 00:55:44.529335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529428 | 
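Each `item={'key': 'ovn-controller', 'value': {...}}` in the loop output above is one entry of a Kolla service map; the config-directory task iterates the entries whose `enabled` flag is true. A rough Python equivalent (the dict shape and image tag are copied from the log; the directory path convention is an assumption for illustration):

```python
services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def config_dirs(services, node_config_dir="/etc/kolla"):
    """Yield the config directory for every enabled service in the map."""
    for name, svc in services.items():
        if svc.get("enabled"):
            yield f"{node_config_dir}/{name}"

print(list(config_dirs(services)))  # ['/etc/kolla/ovn-controller']
```

The subsequent tasks (config.json, systemd override, container check) loop over the same filtered map, which is why every loop item in the log repeats the identical service definition.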
orchestrator | 2026-03-09 00:55:44.529440 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-09 00:55:44.529452 | orchestrator | Monday 09 March 2026 00:53:11 +0000 (0:00:01.918) 0:00:05.262 ********** 2026-03-09 00:55:44.529477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529602 | orchestrator | 2026-03-09 00:55:44.529618 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-09 00:55:44.529633 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:02.448) 0:00:07.710 ********** 2026-03-09 00:55:44.529650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529667 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529785 | orchestrator | 2026-03-09 00:55:44.529801 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-09 00:55:44.529818 | orchestrator | Monday 09 March 2026 00:53:15 +0000 (0:00:01.609) 0:00:09.320 ********** 2026-03-09 00:55:44.529835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.529981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-09 00:55:44.530113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530208 | orchestrator | 2026-03-09 00:55:44.530233 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-09 00:55:44.530249 | orchestrator | Monday 09 March 2026 00:53:17 +0000 (0:00:01.988) 0:00:11.308 ********** 2026-03-09 00:55:44.530267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.530341 | orchestrator | 2026-03-09 00:55:44.530351 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-09 00:55:44.530361 | orchestrator | Monday 09 March 2026 00:53:19 +0000 (0:00:01.820) 0:00:13.129 ********** 2026-03-09 00:55:44.530371 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:55:44.530382 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:55:44.530391 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:44.530401 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:55:44.530411 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:44.530421 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:44.530433 | orchestrator | 2026-03-09 00:55:44.530456 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-09 00:55:44.530473 | orchestrator | Monday 09 March 2026 00:53:21 +0000 (0:00:02.710) 0:00:15.839 ********** 2026-03-09 00:55:44.530490 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-09 00:55:44.530507 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-09 00:55:44.530519 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-09 00:55:44.530535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-09 00:55:44.530545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-09 00:55:44.530555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-09 00:55:44.530565 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530574 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530589 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530609 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:55:44.530628 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530640 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530650 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530659 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530669 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530679 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:55:44.530689 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530699 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530709 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530719 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530729 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:55:44.530748 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530758 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530767 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:55:44.530813 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530823 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530833 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:55:44.530871 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:44.530881 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:44.530891 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:55:44.530901 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:44.530917 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:44.530927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:55:44.530937 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-09 00:55:44.530953 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-09 
00:55:44.530974 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-09 00:55:44.531051 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-09 00:55:44.531069 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-09 00:55:44.531085 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-09 00:55:44.531102 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:44.531118 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:44.531134 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:55:44.531152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:44.531169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:44.531186 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:55:44.531203 | orchestrator | 2026-03-09 00:55:44.531219 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531236 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:20.799) 0:00:36.639 ********** 2026-03-09 00:55:44.531260 | orchestrator | 2026-03-09 00:55:44.531275 | 
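The "Configure OVN in OVSDB" task above writes `external_ids` keys such as `ovn-encap-ip` and `ovn-remote` into the local Open vSwitch database; per-node values differ (each chassis gets its own encap IP and MAC mapping) while the southbound DB endpoints are shared. A sketch of assembling those settings for one node, with addresses and values taken from the log (the function itself is illustrative, not the role's actual code):

```python
def ovn_external_ids(node_ip, sb_db_ips, sb_port=6642, is_gateway=False,
                     bridge_mappings="physnet1:br-ex"):
    """Build the external_ids key/value pairs that ovn-controller reads."""
    ids = {
        "ovn-encap-ip": node_ip,
        "ovn-encap-type": "geneve",
        "ovn-remote": ",".join(f"tcp:{ip}:{sb_port}" for ip in sb_db_ips),
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": "false",
    }
    if is_gateway:
        # Gateway chassis additionally advertise bridge mappings and CMS options,
        # matching the 'state': 'present' items for testbed-node-0..2 in the log.
        ids["ovn-bridge-mappings"] = bridge_mappings
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids

ids = ovn_external_ids("192.168.16.10",
                       ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
                       is_gateway=True)
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

In the log, nodes 0–2 act as gateway chassis (bridge mappings and `ovn-cms-options` set to present), while nodes 3–5 have those keys absent and instead carry per-chassis MAC mappings.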
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531288 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:00.086) 0:00:36.726 ********** 2026-03-09 00:55:44.531301 | orchestrator | 2026-03-09 00:55:44.531315 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531329 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:00.077) 0:00:36.803 ********** 2026-03-09 00:55:44.531342 | orchestrator | 2026-03-09 00:55:44.531356 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531366 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:00.130) 0:00:36.933 ********** 2026-03-09 00:55:44.531374 | orchestrator | 2026-03-09 00:55:44.531382 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531390 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:00.278) 0:00:37.211 ********** 2026-03-09 00:55:44.531398 | orchestrator | 2026-03-09 00:55:44.531406 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:55:44.531414 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:00.089) 0:00:37.301 ********** 2026-03-09 00:55:44.531422 | orchestrator | 2026-03-09 00:55:44.531430 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-09 00:55:44.531438 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:00.116) 0:00:37.418 ********** 2026-03-09 00:55:44.531446 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:44.531454 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:55:44.531462 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:55:44.531470 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:55:44.531479 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:44.531486 | 
orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:44.531494 | orchestrator | 2026-03-09 00:55:44.531502 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-09 00:55:44.531510 | orchestrator | Monday 09 March 2026 00:53:45 +0000 (0:00:02.090) 0:00:39.508 ********** 2026-03-09 00:55:44.531518 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:44.531527 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:55:44.531535 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:55:44.531542 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:44.531550 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:44.531558 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:55:44.531566 | orchestrator | 2026-03-09 00:55:44.531574 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-09 00:55:44.531582 | orchestrator | 2026-03-09 00:55:44.531590 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:55:44.531598 | orchestrator | Monday 09 March 2026 00:54:13 +0000 (0:00:27.858) 0:01:07.367 ********** 2026-03-09 00:55:44.531606 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:44.531614 | orchestrator | 2026-03-09 00:55:44.531622 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:55:44.531630 | orchestrator | Monday 09 March 2026 00:54:14 +0000 (0:00:00.788) 0:01:08.155 ********** 2026-03-09 00:55:44.531638 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:44.531646 | orchestrator | 2026-03-09 00:55:44.531662 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-09 00:55:44.531670 | orchestrator | Monday 
09 March 2026 00:54:14 +0000 (0:00:00.640) 0:01:08.796 **********
2026-03-09 00:55:44.531678 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.531686 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.531693 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.531701 | orchestrator |
2026-03-09 00:55:44.531709 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-09 00:55:44.531717 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:01.201) 0:01:09.998 **********
2026-03-09 00:55:44.531732 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.531745 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.531753 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.531761 | orchestrator |
2026-03-09 00:55:44.531769 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-09 00:55:44.531777 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:00.472) 0:01:10.470 **********
2026-03-09 00:55:44.531785 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.531792 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.531800 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.531808 | orchestrator |
2026-03-09 00:55:44.531816 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-09 00:55:44.531823 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:00.464) 0:01:10.934 **********
2026-03-09 00:55:44.531831 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.531839 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.531847 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.531855 | orchestrator |
2026-03-09 00:55:44.531863 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-09 00:55:44.531871 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:00.416) 0:01:11.351 **********
2026-03-09 00:55:44.531879 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.531886 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.531894 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.531902 | orchestrator |
2026-03-09 00:55:44.531910 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-09 00:55:44.531918 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:01.527) 0:01:12.878 **********
2026-03-09 00:55:44.531926 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.531934 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.531941 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.531949 | orchestrator |
2026-03-09 00:55:44.531957 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-09 00:55:44.531965 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:00.661) 0:01:13.539 **********
2026-03-09 00:55:44.531973 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.531981 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532042 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532053 | orchestrator |
2026-03-09 00:55:44.532061 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-09 00:55:44.532069 | orchestrator | Monday 09 March 2026 00:54:20 +0000 (0:00:00.869) 0:01:14.408 **********
2026-03-09 00:55:44.532077 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532085 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532093 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532101 | orchestrator |
2026-03-09 00:55:44.532110 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-09 00:55:44.532118 | orchestrator | Monday 09 March 2026 00:54:21 +0000 (0:00:00.620) 0:01:15.028 **********
2026-03-09 00:55:44.532125 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532151 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532160 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532168 | orchestrator |
2026-03-09 00:55:44.532176 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-09 00:55:44.532184 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:00.925) 0:01:15.954 **********
2026-03-09 00:55:44.532192 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532200 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532208 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532216 | orchestrator |
2026-03-09 00:55:44.532224 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-09 00:55:44.532232 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:00.557) 0:01:16.512 **********
2026-03-09 00:55:44.532240 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532255 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532263 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532271 | orchestrator |
2026-03-09 00:55:44.532279 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-09 00:55:44.532287 | orchestrator | Monday 09 March 2026 00:54:23 +0000 (0:00:00.483) 0:01:16.996 **********
2026-03-09 00:55:44.532295 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532303 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532315 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532329 | orchestrator |
2026-03-09 00:55:44.532342 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-09 00:55:44.532355 | orchestrator | Monday 09 March 2026 00:54:23 +0000 (0:00:00.335) 0:01:17.331 **********
2026-03-09 00:55:44.532368 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532382 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532395 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532409 | orchestrator |
2026-03-09 00:55:44.532425 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-09 00:55:44.532438 | orchestrator | Monday 09 March 2026 00:54:24 +0000 (0:00:00.647) 0:01:17.978 **********
2026-03-09 00:55:44.532451 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532464 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532478 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532492 | orchestrator |
2026-03-09 00:55:44.532507 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-09 00:55:44.532520 | orchestrator | Monday 09 March 2026 00:54:24 +0000 (0:00:00.372) 0:01:18.351 **********
2026-03-09 00:55:44.532534 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532547 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532561 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532571 | orchestrator |
2026-03-09 00:55:44.532585 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-09 00:55:44.532592 | orchestrator | Monday 09 March 2026 00:54:24 +0000 (0:00:00.358) 0:01:18.710 **********
2026-03-09 00:55:44.532599 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532606 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532613 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532619 | orchestrator |
2026-03-09 00:55:44.532626 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-09 00:55:44.532633 | orchestrator | Monday 09 March 2026 00:54:25 +0000 (0:00:00.495) 0:01:19.205 **********
2026-03-09 00:55:44.532640 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532652 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532659 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532666 | orchestrator |
2026-03-09 00:55:44.532672 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-09 00:55:44.532679 | orchestrator | Monday 09 March 2026 00:54:25 +0000 (0:00:00.604) 0:01:19.810 **********
2026-03-09 00:55:44.532686 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:55:44.532693 | orchestrator |
2026-03-09 00:55:44.532700 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-09 00:55:44.532706 | orchestrator | Monday 09 March 2026 00:54:27 +0000 (0:00:01.383) 0:01:21.194 **********
2026-03-09 00:55:44.532713 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.532720 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.532727 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.532733 | orchestrator |
2026-03-09 00:55:44.532740 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-09 00:55:44.532747 | orchestrator | Monday 09 March 2026 00:54:27 +0000 (0:00:00.659) 0:01:21.853 **********
2026-03-09 00:55:44.532753 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.532760 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.532767 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.532781 | orchestrator |
2026-03-09 00:55:44.532788 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-09 00:55:44.532795 | orchestrator | Monday 09 March 2026 00:54:28 +0000 (0:00:00.592) 0:01:22.446 **********
2026-03-09 00:55:44.532801 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532808 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532815 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532822 | orchestrator |
2026-03-09 00:55:44.532828 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-09 00:55:44.532835 | orchestrator | Monday 09 March 2026 00:54:29 +0000 (0:00:00.852) 0:01:23.298 **********
2026-03-09 00:55:44.532842 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532849 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532855 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532862 | orchestrator |
2026-03-09 00:55:44.532869 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-09 00:55:44.532876 | orchestrator | Monday 09 March 2026 00:54:29 +0000 (0:00:00.467) 0:01:23.766 **********
2026-03-09 00:55:44.532883 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532889 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532896 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532903 | orchestrator |
2026-03-09 00:55:44.532910 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-09 00:55:44.532916 | orchestrator | Monday 09 March 2026 00:54:30 +0000 (0:00:00.550) 0:01:24.317 **********
2026-03-09 00:55:44.532923 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532930 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532937 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532944 | orchestrator |
2026-03-09 00:55:44.532951 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-09 00:55:44.532958 | orchestrator | Monday 09 March 2026 00:54:30 +0000 (0:00:00.522) 0:01:24.839 **********
2026-03-09 00:55:44.532965 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.532972 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.532979 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.532985 | orchestrator |
2026-03-09 00:55:44.533016 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-09 00:55:44.533026 | orchestrator | Monday 09 March 2026 00:54:31 +0000 (0:00:00.911) 0:01:25.750 **********
2026-03-09 00:55:44.533036 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.533047 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.533059 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.533069 | orchestrator |
2026-03-09 00:55:44.533080 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-09 00:55:44.533087 | orchestrator | Monday 09 March 2026 00:54:32 +0000 (0:00:00.402) 0:01:26.152 **********
2026-03-09 00:55:44.533095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533183 | orchestrator |
2026-03-09 00:55:44.533190 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-09 00:55:44.533197 | orchestrator | Monday 09 March 2026 00:54:33 +0000 (0:00:01.662) 0:01:27.815 **********
2026-03-09 00:55:44.533204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533282 | orchestrator |
2026-03-09 00:55:44.533289 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-09 00:55:44.533296 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:04.138) 0:01:31.953 **********
2026-03-09 00:55:44.533303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533383 | orchestrator |
2026-03-09 00:55:44.533390 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.533397 | orchestrator | Monday 09 March 2026 00:54:40 +0000 (0:00:02.747) 0:01:34.700 **********
2026-03-09 00:55:44.533404 | orchestrator |
2026-03-09 00:55:44.533411 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.533418 | orchestrator | Monday 09 March 2026 00:54:40 +0000 (0:00:00.075) 0:01:34.775 **********
2026-03-09 00:55:44.533424 | orchestrator |
2026-03-09 00:55:44.533431 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.533438 | orchestrator | Monday 09 March 2026 00:54:40 +0000 (0:00:00.082) 0:01:34.858 **********
2026-03-09 00:55:44.533445 | orchestrator |
2026-03-09 00:55:44.533452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-09 00:55:44.533458 | orchestrator | Monday 09 March 2026 00:54:41 +0000 (0:00:00.080) 0:01:34.939 **********
2026-03-09 00:55:44.533465 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.533478 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.533485 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.533492 | orchestrator |
2026-03-09 00:55:44.533499 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-09 00:55:44.533506 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:06.713) 0:01:41.652 **********
2026-03-09 00:55:44.533513 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.533519 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.533526 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.533533 | orchestrator |
2026-03-09 00:55:44.533540 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-09 00:55:44.533546 | orchestrator | Monday 09 March 2026 00:54:55 +0000 (0:00:07.937) 0:01:49.590 **********
2026-03-09 00:55:44.533553 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.533560 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.533566 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.533573 | orchestrator |
2026-03-09 00:55:44.533580 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-09 00:55:44.533587 | orchestrator | Monday 09 March 2026 00:55:03 +0000 (0:00:08.184) 0:01:57.775 **********
2026-03-09 00:55:44.533593 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.533600 | orchestrator |
2026-03-09 00:55:44.533607 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-09 00:55:44.533613 | orchestrator | Monday 09 March 2026 00:55:04 +0000 (0:00:00.208) 0:01:57.983 **********
2026-03-09 00:55:44.533620 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.533627 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.533634 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.533640 | orchestrator |
2026-03-09 00:55:44.533651 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-09 00:55:44.533658 | orchestrator | Monday 09 March 2026 00:55:04 +0000 (0:00:00.840) 0:01:58.824 **********
2026-03-09 00:55:44.533665 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.533672 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.533678 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.533685 | orchestrator |
2026-03-09 00:55:44.533692 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-09 00:55:44.533699 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:00.746) 0:01:59.571 **********
2026-03-09 00:55:44.533706 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.533712 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.533723 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.533730 | orchestrator |
2026-03-09 00:55:44.533737 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-09 00:55:44.533744 | orchestrator | Monday 09 March 2026 00:55:06 +0000 (0:00:00.967) 0:02:00.539 **********
2026-03-09 00:55:44.533751 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.533757 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.533764 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.533771 | orchestrator |
2026-03-09 00:55:44.533778 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-09 00:55:44.533784 | orchestrator | Monday 09 March 2026 00:55:07 +0000 (0:00:00.966) 0:02:01.505 **********
2026-03-09 00:55:44.533791 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.533798 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.533805 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.533811 | orchestrator |
2026-03-09 00:55:44.533818 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-09 00:55:44.533825 | orchestrator | Monday 09 March 2026 00:55:08 +0000 (0:00:01.004) 0:02:02.509 **********
2026-03-09 00:55:44.533832 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.533838 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.533845 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.533852 | orchestrator |
2026-03-09 00:55:44.533858 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-09 00:55:44.533871 | orchestrator | Monday 09 March 2026 00:55:09 +0000 (0:00:01.001) 0:02:03.511 **********
2026-03-09 00:55:44.533878 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.533884 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.533891 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.533898 | orchestrator |
2026-03-09 00:55:44.533904 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-09 00:55:44.533911 | orchestrator | Monday 09 March 2026 00:55:09 +0000 (0:00:00.323) 0:02:03.834 **********
2026-03-09 00:55:44.533918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533947 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533954 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533964 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533976 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.533983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534011 | orchestrator |
2026-03-09 00:55:44.534048 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-09 00:55:44.534055 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:01.466) 0:02:05.301 **********
2026-03-09 00:55:44.534062 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534069 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534076 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534110 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534141 | orchestrator |
2026-03-09 00:55:44.534148 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-09 00:55:44.534155 | orchestrator | Monday 09 March 2026 00:55:15 +0000 (0:00:04.281) 0:02:09.583 **********
2026-03-09 00:55:44.534162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534169 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534176 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534183 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.534191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.534198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.534205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.534216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:55:44.534232 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:55:44.534239 | orchestrator |
2026-03-09 00:55:44.534246 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.534253 | orchestrator | Monday 09 March 2026 00:55:18 +0000 (0:00:03.114) 0:02:12.697 **********
2026-03-09 00:55:44.534260 | orchestrator |
2026-03-09 00:55:44.534266 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.534273 | orchestrator | Monday 09 March 2026 00:55:18 +0000 (0:00:00.072) 0:02:12.770 **********
2026-03-09 00:55:44.534280 | orchestrator |
2026-03-09 00:55:44.534287 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:55:44.534293 | orchestrator | Monday 09 March 2026 00:55:18 +0000 (0:00:00.064) 0:02:12.835 **********
2026-03-09 00:55:44.534300 | orchestrator |
2026-03-09 00:55:44.534307 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-09 00:55:44.534314 | orchestrator | Monday 09 March 2026 00:55:19 +0000 (0:00:00.074) 0:02:12.910 **********
2026-03-09 00:55:44.534321 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.534328 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.534334 | orchestrator |
2026-03-09 00:55:44.534341 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-09 00:55:44.534348 | orchestrator | Monday 09 March 2026 00:55:25 +0000 (0:00:06.402) 0:02:19.312 **********
2026-03-09 00:55:44.534355 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.534361 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.534368 | orchestrator |
2026-03-09 00:55:44.534375 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-09 00:55:44.534382 | orchestrator | Monday 09 March 2026 00:55:32 +0000 (0:00:06.619) 0:02:25.931 **********
2026-03-09 00:55:44.534389 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:44.534396 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:44.534402 | orchestrator |
2026-03-09 00:55:44.534409 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-09 00:55:44.534416 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:06.500) 0:02:32.432 **********
2026-03-09 00:55:44.534423 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:44.534429 | orchestrator |
2026-03-09 00:55:44.534436 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-09 00:55:44.534443 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.162) 0:02:32.594 **********
2026-03-09 00:55:44.534449 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.534456 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.534463 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.534470 | orchestrator |
2026-03-09 00:55:44.534476 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-09 00:55:44.534483 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:00.789) 0:02:33.383 **********
2026-03-09 00:55:44.534490 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.534497 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.534504 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.534510 | orchestrator |
2026-03-09 00:55:44.534517 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-09 00:55:44.534524 | orchestrator | Monday 09 March 2026 00:55:40 +0000 (0:00:00.826) 0:02:33.996 **********
2026-03-09 00:55:44.534530 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.534537 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.534544 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.534550 | orchestrator |
2026-03-09 00:55:44.534557 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-09 00:55:44.534570 | orchestrator | Monday 09 March 2026 00:55:40 +0000 (0:00:00.789) 0:02:34.823 **********
2026-03-09 00:55:44.534578 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:44.534584 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:44.534591 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:44.534598 | orchestrator |
2026-03-09 00:55:44.534605 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-09 00:55:44.534612 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:00.789) 0:02:35.612 **********
2026-03-09 00:55:44.534619 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.534626 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.534632 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.534639 | orchestrator |
2026-03-09 00:55:44.534646 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-09 00:55:44.534652 | orchestrator | Monday 09 March 2026 00:55:42 +0000 (0:00:00.771) 0:02:36.384 **********
2026-03-09 00:55:44.534659 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:44.534666 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:44.534672 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:44.534679 | orchestrator |
2026-03-09 00:55:44.534686 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:55:44.534693
| orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-09 00:55:44.534700 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-09 00:55:44.534711 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-09 00:55:44.534719 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:55:44.534726 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:55:44.534738 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:55:44.534745 | orchestrator |
2026-03-09 00:55:44.534752 | orchestrator |
2026-03-09 00:55:44.534759 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:55:44.534766 | orchestrator | Monday 09 March 2026 00:55:43 +0000 (0:00:01.046) 0:02:37.431 **********
2026-03-09 00:55:44.534773 | orchestrator | ===============================================================================
2026-03-09 00:55:44.534780 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.86s
2026-03-09 00:55:44.534786 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.80s
2026-03-09 00:55:44.534793 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.69s
2026-03-09 00:55:44.534800 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.56s
2026-03-09 00:55:44.534807 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.12s
2026-03-09 00:55:44.534813 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s
2026-03-09 00:55:44.534820 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.14s
2026-03-09 00:55:44.534827 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.11s
2026-03-09 00:55:44.534833 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.75s
2026-03-09 00:55:44.534840 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.71s
2026-03-09 00:55:44.534847 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.45s
2026-03-09 00:55:44.534858 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.09s
2026-03-09 00:55:44.534865 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.99s
2026-03-09 00:55:44.534872 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.92s
2026-03-09 00:55:44.534879 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.82s
2026-03-09 00:55:44.534885 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s
2026-03-09 00:55:44.534892 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.61s
2026-03-09 00:55:44.534899 | orchestrator | ovn-db : Establish whether the OVN SB cluster has already existed ------- 1.53s
2026-03-09 00:55:44.534905 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.48s
2026-03-09 00:55:44.534912 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s
2026-03-09 00:55:44.534919 | orchestrator | 2026-03-09 00:55:44 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:55:47.582829 | orchestrator | 2026-03-09 00:55:47 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:55:47.583429 | orchestrator
| 2026-03-09 00:55:47 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED
2026-03-09 00:55:47.583608 | orchestrator | 2026-03-09 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:58:41.206079 | orchestrator | 2026-03-09 00:58:41 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED
2026-03-09 00:58:41.208197 | orchestrator | 2026-03-09 00:58:41 | INFO
| Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:41.208231 | orchestrator | 2026-03-09 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:44.250162 | orchestrator | 2026-03-09 00:58:44 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:44.254376 | orchestrator | 2026-03-09 00:58:44 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:44.254455 | orchestrator | 2026-03-09 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:47.297660 | orchestrator | 2026-03-09 00:58:47 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:47.298865 | orchestrator | 2026-03-09 00:58:47 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:47.298921 | orchestrator | 2026-03-09 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:50.344321 | orchestrator | 2026-03-09 00:58:50 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:50.344388 | orchestrator | 2026-03-09 00:58:50 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:50.344397 | orchestrator | 2026-03-09 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:53.381398 | orchestrator | 2026-03-09 00:58:53 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:53.383383 | orchestrator | 2026-03-09 00:58:53 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:53.383508 | orchestrator | 2026-03-09 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:56.434496 | orchestrator | 2026-03-09 00:58:56 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:56.435931 | orchestrator | 2026-03-09 00:58:56 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 
00:58:56.436300 | orchestrator | 2026-03-09 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:58:59.488744 | orchestrator | 2026-03-09 00:58:59 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:58:59.490545 | orchestrator | 2026-03-09 00:58:59 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:58:59.490831 | orchestrator | 2026-03-09 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:02.543530 | orchestrator | 2026-03-09 00:59:02 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:59:02.544203 | orchestrator | 2026-03-09 00:59:02 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:02.544271 | orchestrator | 2026-03-09 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:05.590982 | orchestrator | 2026-03-09 00:59:05 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state STARTED 2026-03-09 00:59:05.591569 | orchestrator | 2026-03-09 00:59:05 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:05.591605 | orchestrator | 2026-03-09 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:08.645663 | orchestrator | 2026-03-09 00:59:08.645872 | orchestrator | 2026-03-09 00:59:08.645903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:59:08.645925 | orchestrator | 2026-03-09 00:59:08.645944 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:59:08.645964 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.551) 0:00:00.551 ********** 2026-03-09 00:59:08.645984 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.646004 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.646348 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.646380 | orchestrator | 
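The long run of identical status lines above comes from a client polling two task IDs until they leave the STARTED state, sleeping between checks. A minimal sketch of such a poll-until-done loop; the `check_state` callback here is a hypothetical stand-in for the real OSISM task API, not taken from this log:

```python
import time

def wait_for_tasks(task_ids, check_state, interval=1.0, timeout=3600):
    """Poll task states until every task leaves the STARTED/PENDING phase.

    check_state(task_id) -> str is assumed to return a Celery-style state
    such as "PENDING", "STARTED", "SUCCESS" or "FAILURE".
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = check_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                # Terminal state: stop watching this task.
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

The log's fixed one-second wait (plus client overhead, giving the ~3 s cadence seen above) corresponds to a constant `interval`; a production watcher might add backoff instead.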
2026-03-09 00:59:08.646399 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:59:08.646419 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.576) 0:00:01.128 **********
2026-03-09 00:59:08.646438 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-09 00:59:08.646457 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-09 00:59:08.646476 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-09 00:59:08.646494 | orchestrator |
2026-03-09 00:59:08.646507 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-09 00:59:08.646518 | orchestrator |
2026-03-09 00:59:08.646529 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-09 00:59:08.646540 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.670) 0:00:01.798 **********
2026-03-09 00:59:08.646552 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.646563 | orchestrator |
2026-03-09 00:59:08.646574 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-09 00:59:08.646585 | orchestrator | Monday 09 March 2026 00:51:43 +0000 (0:00:00.867) 0:00:02.666 **********
2026-03-09 00:59:08.646596 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.646607 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.646618 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.646629 | orchestrator |
2026-03-09 00:59:08.646640 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-09 00:59:08.646651 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:00.718) 0:00:03.385 **********
2026-03-09 00:59:08.646662 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.646674 | orchestrator |
2026-03-09 00:59:08.646685 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-09 00:59:08.646697 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:01.620) 0:00:05.006 **********
2026-03-09 00:59:08.646708 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.646725 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.646744 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.646788 | orchestrator |
2026-03-09 00:59:08.646808 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-09 00:59:08.646859 | orchestrator | Monday 09 March 2026 00:51:46 +0000 (0:00:01.048) 0:00:06.054 **********
2026-03-09 00:59:08.646878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.646898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.646918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.646937 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.646956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.647003 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-09 00:59:08.647022 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-09 00:59:08.647043 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-09 00:59:08.647061 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-09 00:59:08.647185 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-09 00:59:08.647205 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-09 00:59:08.647224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-09 00:59:08.647352 | orchestrator |
2026-03-09 00:59:08.647374 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-09 00:59:08.647393 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:03.832) 0:00:09.887 **********
2026-03-09 00:59:08.647413 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-09 00:59:08.647432 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-09 00:59:08.647452 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-09 00:59:08.647471 | orchestrator |
2026-03-09 00:59:08.647491 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-09 00:59:08.647509 | orchestrator | Monday 09 March 2026 00:51:51 +0000 (0:00:01.098) 0:00:10.985 **********
2026-03-09 00:59:08.647528 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-09 00:59:08.647549 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-09 00:59:08.647570 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-09 00:59:08.647590 | orchestrator |
2026-03-09 00:59:08.647609 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-09 00:59:08.647628 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:02.028) 0:00:13.014 **********
2026-03-09 00:59:08.647649 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-09 00:59:08.647669 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-09 00:59:08.647713 | orchestrator | skipping: [testbed-node-0]
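In the "Setting sysctl values" task above, items whose value is the sentinel string 'KOLLA_UNSET' are reported ok and left unmanaged, while concrete values are applied and reported changed. A simplified sketch of that sentinel filtering, working on plain dicts shaped like the log items; this is an illustration, not the actual kolla-ansible role logic:

```python
def plan_sysctl(settings, sentinel="KOLLA_UNSET"):
    """Split sysctl settings into entries to apply and entries to skip.

    `settings` is a list of {'name': ..., 'value': ...} dicts, as seen in
    the task output; values equal to `sentinel` are left unmanaged.
    """
    to_apply, to_skip = [], []
    for item in settings:
        (to_skip if item["value"] == sentinel else to_apply).append(item)
    return to_apply, to_skip

def sysctl_path(name):
    """Map a key like net.ipv4.ip_nonlocal_bind to its /proc/sys path."""
    return "/proc/sys/" + name.replace(".", "/")
```

Writing the surviving values to the paths returned by `sysctl_path` (as root) would mirror what the Ansible sysctl module does, minus its persistence in /etc/sysctl.conf.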
2026-03-09 00:59:08.647734 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.647777 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-09 00:59:08.647797 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.647816 | orchestrator |
2026-03-09 00:59:08.647835 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-09 00:59:08.647880 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:01.814) 0:00:14.828 **********
2026-03-09 00:59:08.647913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.647943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.647968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.647978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.647989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648168 | orchestrator |
2026-03-09 00:59:08.648178 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-09 00:59:08.648188 | orchestrator | Monday 09 March 2026 00:52:00 +0000 (0:00:04.759) 0:00:19.588 **********
2026-03-09 00:59:08.648198 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.648208 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.648218 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.648227 | orchestrator |
2026-03-09 00:59:08.648237 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-09 00:59:08.648247 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:02.477) 0:00:22.065 **********
2026-03-09 00:59:08.648257 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-09 00:59:08.648267 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-09 00:59:08.648277 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-09 00:59:08.648286 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-09 00:59:08.648296 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-09 00:59:08.648305 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-09 00:59:08.648315 | orchestrator |
2026-03-09 00:59:08.648325 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-09 00:59:08.648335 | orchestrator | Monday 09 March 2026 00:52:06 +0000 (0:00:04.063) 0:00:26.129 **********
2026-03-09 00:59:08.648344 | orchestrator | changed: [testbed-node-0]
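The container definitions above carry Docker-style healthcheck settings ('interval': '30', 'retries': '3', 'start_period': '5'). Under those semantics a container only turns unhealthy after `retries` consecutive probe failures, and a single success resets the counter. A small sketch of that evaluation rule (a hypothetical helper for illustration, not part of kolla-ansible or Docker):

```python
def health_status(results, retries=3):
    """Docker-style health evaluation over a probe history.

    `results` is the sequence of probe outcomes as booleans (True = probe
    passed, e.g. healthcheck_curl returned success). The container becomes
    'unhealthy' only after `retries` consecutive failures; any success
    resets the failure count, mirroring the 'retries': '3' config above.
    """
    consecutive_failures = 0
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= retries:
            return "unhealthy"
    return "healthy" if results else "starting"
```

With the config shown, probes run every 30 seconds, so a dead haproxy would be flagged unhealthy roughly 90 seconds after its endpoint stops answering.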
changed: [testbed-node-1]
2026-03-09 00:59:08.648364 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.648374 | orchestrator |
2026-03-09 00:59:08.648383 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-09 00:59:08.648393 | orchestrator | Monday 09 March 2026 00:52:09 +0000 (0:00:02.557) 0:00:28.686 **********
2026-03-09 00:59:08.648404 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.648413 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.648423 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.648433 | orchestrator |
2026-03-09 00:59:08.648442 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-09 00:59:08.648452 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:01.807) 0:00:30.493 **********
2026-03-09 00:59:08.648463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-09 00:59:08.648529 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.648539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-09 00:59:08.648586 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.648601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-09 00:59:08.648648 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.648658 | orchestrator |
2026-03-09 00:59:08.648667 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-09 00:59:08.648677 | orchestrator | Monday 09 March 2026 00:52:13 +0000 (0:00:01.743) 0:00:32.237 **********
2026-03-09 00:59:08.648688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-09 00:59:08.648736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-09 00:59:08.648746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-09 00:59:08.648785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-09 00:59:08.648802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.648813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.648841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:59:08.648852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.648863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.648873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7', '__omit_place_holder__0b3fc3b7b8df27c3ee9128ae0325500cc21a0ba7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:59:08.648883 | orchestrator | 2026-03-09 00:59:08.648893 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-09 00:59:08.649051 | orchestrator | Monday 09 March 2026 00:52:16 +0000 (0:00:03.950) 0:00:36.187 ********** 2026-03-09 00:59:08.649062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.649152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.649163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.649173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.649190 | orchestrator | 2026-03-09 00:59:08.649200 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-09 00:59:08.649210 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:03.559) 0:00:39.747 ********** 2026-03-09 00:59:08.649220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:59:08.649237 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:59:08 | INFO  | Task e3d3e94f-d226-458e-9c7a-142a8e99a87c is in state SUCCESS 2026-03-09 00:59:08.649258 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:59:08.649268 | orchestrator | 2026-03-09 00:59:08.649282 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-09 00:59:08.649292 | orchestrator | Monday 09 March 2026 00:52:23 +0000 (0:00:03.289) 0:00:43.037 ********** 2026-03-09 00:59:08.649302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:59:08.649312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:59:08.649322 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:59:08.649332 | orchestrator | 2026-03-09 00:59:08.649342 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-09 00:59:08.649359 | orchestrator | Monday 09 March 2026 00:52:30 +0000 (0:00:06.694) 0:00:49.731 ********** 2026-03-09 00:59:08.649375 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.649393 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.649408 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.649424 | orchestrator | 2026-03-09 00:59:08.649441 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-09 00:59:08.649459 | orchestrator | Monday 09 March 2026 00:52:31 +0000 (0:00:01.139) 0:00:50.871 ********** 2026-03-09 00:59:08.649477 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:59:08.649494 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:59:08.649510 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:59:08.649520 | orchestrator | 2026-03-09 00:59:08.649530 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-09 00:59:08.649565 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:06.124) 0:00:56.995 ********** 2026-03-09 00:59:08.649575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:59:08.649585 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:59:08.649595 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:59:08.649605 | orchestrator | 2026-03-09 00:59:08.649614 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-09 00:59:08.649641 | orchestrator | Monday 09 March 2026 00:52:42 +0000 (0:00:04.810) 0:01:01.805 ********** 2026-03-09 00:59:08.649661 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-09 00:59:08.649671 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-09 00:59:08.649681 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-09 00:59:08.649690 | orchestrator | 2026-03-09 00:59:08.649803 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-09 00:59:08.649821 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:02.214) 0:01:04.020 ********** 
2026-03-09 00:59:08.649836 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-09 00:59:08.649851 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-09 00:59:08.649866 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-09 00:59:08.649880 | orchestrator | 2026-03-09 00:59:08.649895 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-09 00:59:08.649910 | orchestrator | Monday 09 March 2026 00:52:47 +0000 (0:00:02.853) 0:01:06.873 ********** 2026-03-09 00:59:08.649924 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.649952 | orchestrator | 2026-03-09 00:59:08.649967 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-09 00:59:08.649984 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:01.381) 0:01:08.255 ********** 2026-03-09 00:59:08.650003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.650202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.650229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.650255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.650275 | orchestrator | 2026-03-09 00:59:08.650291 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-09 00:59:08.650308 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:03.916) 0:01:12.171 ********** 2026-03-09 00:59:08.650327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.650374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.650393 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.650411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.650588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.650607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650624 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.650641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.650706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.650725 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.650745 | orchestrator | 2026-03-09 00:59:08.650787 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-09 00:59:08.650802 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.965) 0:01:13.137 ********** 2026-03-09 00:59:08.650817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.650856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.650871 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.650891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.650931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.650946 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.650961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.650993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651029 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.651046 | orchestrator | 2026-03-09 00:59:08.651064 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-09 00:59:08.651081 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:00.770) 0:01:13.907 ********** 2026-03-09 00:59:08.651117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651231 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.651249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651304 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.651378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651458 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.651475 | orchestrator | 2026-03-09 00:59:08.651506 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-09 00:59:08.651525 | orchestrator | Monday 09 March 2026 00:52:55 
+0000 (0:00:00.764) 0:01:14.672 ********** 2026-03-09 00:59:08.651541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651650 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.651668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.651686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.651745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.651851 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.651871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 
00:59:08.651888 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.651905 | orchestrator | 2026-03-09 00:59:08.651922 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-09 00:59:08.651938 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.614) 0:01:15.286 ********** 2026-03-09 00:59:08.651957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652030 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.652041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652072 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.652082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652132 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.652251 | orchestrator | 2026-03-09 00:59:08.652264 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-09 00:59:08.652274 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.802) 0:01:16.089 ********** 2026-03-09 00:59:08.652285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-09 00:59:08.652306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652316 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.652326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-09 00:59:08.652365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652376 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.652386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-09 00:59:08.652475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652486 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.652496 | orchestrator | 2026-03-09 00:59:08.652508 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-09 00:59:08.652520 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.888) 0:01:16.978 ********** 2026-03-09 00:59:08.652533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652589 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.652601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652638 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.652650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652705 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.652718 | orchestrator | 2026-03-09 00:59:08.652735 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-09 00:59:08.652748 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:00.948) 0:01:17.926 ********** 2026-03-09 00:59:08.652798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.652947 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.652967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.652986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.652996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.653006 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.653085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:59:08.653098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:59:08.653109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:59:08.653119 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.653129 | orchestrator | 2026-03-09 00:59:08.653139 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-09 00:59:08.653149 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:00.978) 0:01:18.904 ********** 2026-03-09 00:59:08.653159 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:59:08.653170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:59:08.653187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:59:08.653197 | orchestrator | 2026-03-09 
00:59:08.653207 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-09 00:59:08.653217 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:02.535) 0:01:21.439 ********** 2026-03-09 00:59:08.653227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:59:08.653237 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:59:08.653247 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:59:08.653257 | orchestrator | 2026-03-09 00:59:08.653266 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-09 00:59:08.653277 | orchestrator | Monday 09 March 2026 00:53:03 +0000 (0:00:01.736) 0:01:23.176 ********** 2026-03-09 00:59:08.653287 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:59:08.653296 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:59:08.653306 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:59:08.653316 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:59:08.653326 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.653336 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:59:08.653346 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.653356 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:59:08.653366 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 00:59:08.653376 | orchestrator | 2026-03-09 00:59:08.653385 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-09 00:59:08.653395 | orchestrator | Monday 09 March 2026 00:53:05 +0000 (0:00:01.143) 0:01:24.320 ********** 2026-03-09 00:59:08.653413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:59:08.653524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.653546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.653557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:59:08.653567 | orchestrator | 2026-03-09 00:59:08.653578 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-09 00:59:08.653596 | orchestrator | Monday 09 March 2026 00:53:08 +0000 (0:00:03.091) 0:01:27.412 ********** 2026-03-09 00:59:08.653606 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.653616 | orchestrator | 2026-03-09 00:59:08.653626 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-09 00:59:08.653636 | orchestrator | Monday 09 March 2026 00:53:08 +0000 (0:00:00.706) 0:01:28.118 ********** 2026-03-09 00:59:08.653649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:59:08.653660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:59:08.653672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.653689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653705 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.653722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 
'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:59:08.653826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-03-09 00:59:08.653848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653877 | orchestrator | 2026-03-09 00:59:08.653887 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-09 00:59:08.653897 | orchestrator | Monday 09 March 2026 00:53:14 +0000 (0:00:06.018) 0:01:34.137 ********** 2026-03-09 00:59:08.653907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:59:08.653918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.653928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.653949 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.653970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:59:08.653987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.653997 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654073 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.654084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:59:08.654093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.654111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654135 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.654143 | orchestrator | 2026-03-09 00:59:08.654151 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-09 00:59:08.654160 | orchestrator | Monday 09 March 2026 00:53:16 +0000 (0:00:01.945) 0:01:36.082 ********** 2026-03-09 00:59:08.654168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654188 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.654196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654212 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.654220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:59:08.654237 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 00:59:08.654245 | orchestrator | 2026-03-09 00:59:08.654252 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-09 00:59:08.654261 | orchestrator | Monday 09 March 2026 00:53:18 +0000 (0:00:01.250) 0:01:37.332 ********** 2026-03-09 00:59:08.654268 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.654277 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.654285 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.654293 | orchestrator | 2026-03-09 00:59:08.654301 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-09 00:59:08.654309 | orchestrator | Monday 09 March 2026 00:53:19 +0000 (0:00:01.492) 0:01:38.825 ********** 2026-03-09 00:59:08.654317 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.654325 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.654332 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.654340 | orchestrator | 2026-03-09 00:59:08.654348 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-09 00:59:08.654356 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:02.440) 0:01:41.266 ********** 2026-03-09 00:59:08.654365 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.654378 | orchestrator | 2026-03-09 00:59:08.654386 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-09 00:59:08.654394 | orchestrator | Monday 09 March 2026 00:53:24 +0000 (0:00:02.255) 0:01:43.521 ********** 2026-03-09 00:59:08.654415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.654425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.654452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.654492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654509 | orchestrator | 2026-03-09 00:59:08.654517 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-09 00:59:08.654525 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:03.610) 0:01:47.132 ********** 2026-03-09 00:59:08.654534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.654547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.654583 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.654592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654608 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.654617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.654635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.654656 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.654664 | orchestrator | 2026-03-09 00:59:08.654672 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-09 00:59:08.654680 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.679) 0:01:47.811 ********** 2026-03-09 00:59:08.654689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654724 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.654732 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.654740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654748 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:59:08.654780 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.654789 | orchestrator | 2026-03-09 00:59:08.654826 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-09 00:59:08.654835 | orchestrator | Monday 09 March 2026 00:53:29 +0000 (0:00:01.082) 0:01:48.894 ********** 2026-03-09 00:59:08.654844 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.654860 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.654869 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.654900 | orchestrator | 2026-03-09 00:59:08.654909 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-09 00:59:08.654917 | orchestrator | Monday 09 March 2026 00:53:31 +0000 (0:00:01.530) 0:01:50.425 ********** 2026-03-09 00:59:08.654960 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.654970 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.654978 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.654986 | orchestrator | 2026-03-09 00:59:08.655027 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-09 00:59:08.655037 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:02.151) 0:01:52.577 ********** 2026-03-09 00:59:08.655045 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.655053 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655061 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.655069 | orchestrator | 2026-03-09 00:59:08.655077 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-09 
00:59:08.655084 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:00.338) 0:01:52.915 ********** 2026-03-09 00:59:08.655092 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.655100 | orchestrator | 2026-03-09 00:59:08.655108 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-09 00:59:08.655116 | orchestrator | Monday 09 March 2026 00:53:34 +0000 (0:00:01.000) 0:01:53.915 ********** 2026-03-09 00:59:08.655156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:59:08.655168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:59:08.655177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:59:08.655192 | orchestrator | 2026-03-09 00:59:08.655201 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-09 00:59:08.655209 | orchestrator | Monday 09 March 2026 00:53:38 +0000 (0:00:03.541) 0:01:57.457 ********** 2026-03-09 00:59:08.655217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:59:08.655226 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.655234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:59:08.655242 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:59:08.655269 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.655277 | orchestrator | 2026-03-09 00:59:08.655285 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-09 00:59:08.655293 | orchestrator | Monday 09 March 2026 00:53:40 +0000 (0:00:02.019) 0:01:59.476 ********** 2026-03-09 00:59:08.655303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655328 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.655336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655345 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655353 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.655361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:59:08.655378 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655386 | orchestrator | 2026-03-09 00:59:08.655394 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-09 00:59:08.655403 | orchestrator | Monday 09 March 2026 00:53:42 +0000 (0:00:02.258) 0:02:01.735 ********** 2026-03-09 00:59:08.655411 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.655457 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655478 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:59:08.655486 | orchestrator | 2026-03-09 00:59:08.655494 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-09 00:59:08.655502 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:01.064) 0:02:02.799 ********** 2026-03-09 00:59:08.655510 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655518 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.655531 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.655540 | orchestrator | 2026-03-09 00:59:08.655583 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-09 00:59:08.655617 | orchestrator | Monday 09 March 2026 00:53:45 +0000 (0:00:01.582) 0:02:04.382 ********** 2026-03-09 00:59:08.655625 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.655633 | orchestrator | 2026-03-09 00:59:08.655646 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-09 00:59:08.655660 | orchestrator | Monday 09 March 2026 00:53:46 +0000 (0:00:01.322) 0:02:05.704 ********** 2026-03-09 00:59:08.655669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.655678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.655749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.655825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655867 | orchestrator | 2026-03-09 00:59:08.655876 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-09 00:59:08.655884 | orchestrator | Monday 09 March 2026 00:53:54 +0000 (0:00:07.862) 0:02:13.566 ********** 2026-03-09 00:59:08.655892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.655901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655937 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.655949 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.655958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.655983 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.655992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.656014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656040 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.656048 | orchestrator | 2026-03-09 00:59:08.656056 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-09 00:59:08.656065 | orchestrator | Monday 09 March 2026 00:53:56 +0000 (0:00:01.711) 0:02:15.278 ********** 2026-03-09 00:59:08.656073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656090 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.656098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656114 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.656122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656136 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:59:08.656145 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.656163 | orchestrator | 2026-03-09 00:59:08.656172 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-09 00:59:08.656180 | orchestrator | Monday 09 March 2026 00:53:57 +0000 (0:00:01.368) 0:02:16.647 ********** 2026-03-09 00:59:08.656188 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.656196 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.656204 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.656212 | orchestrator | 2026-03-09 00:59:08.656223 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-09 00:59:08.656237 | orchestrator | Monday 09 March 2026 00:53:59 +0000 (0:00:01.922) 0:02:18.569 ********** 2026-03-09 00:59:08.656257 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.656270 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.656295 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.656333 | orchestrator | 2026-03-09 00:59:08.656341 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-09 00:59:08.656385 | orchestrator | Monday 09 March 2026 00:54:01 +0000 (0:00:02.337) 0:02:20.907 ********** 2026-03-09 00:59:08.656393 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.656405 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.656413 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.656421 | orchestrator | 2026-03-09 00:59:08.656429 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-09 00:59:08.656437 | 
orchestrator | Monday 09 March 2026 00:54:02 +0000 (0:00:00.490) 0:02:21.398 ********** 2026-03-09 00:59:08.656445 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.656453 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.656460 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.656468 | orchestrator | 2026-03-09 00:59:08.656476 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-09 00:59:08.656484 | orchestrator | Monday 09 March 2026 00:54:02 +0000 (0:00:00.319) 0:02:21.718 ********** 2026-03-09 00:59:08.656492 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.656500 | orchestrator | 2026-03-09 00:59:08.656508 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-09 00:59:08.656516 | orchestrator | Monday 09 March 2026 00:54:03 +0000 (0:00:00.810) 0:02:22.528 ********** 2026-03-09 00:59:08.656525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:59:08.656535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:59:08.656549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:59:08.656618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:59:08.656627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:59:08.656644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.656653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:59:08.656662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656827 | orchestrator |
2026-03-09 00:59:08.656841 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-09 00:59:08.656854 | orchestrator | Monday 09 March 2026 00:54:08 +0000 (0:00:05.320) 0:02:27.848 **********
2026-03-09 00:59:08.656862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name':
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 00:59:08.656876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 00:59:08.656890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.656938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 00:59:08.656955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 00:59:08.656964 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.656973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 00:59:08.656986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 00:59:08.656995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.657120 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.657128 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.657136 | orchestrator |
2026-03-09 00:59:08.657144 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-09 00:59:08.657157 | orchestrator | Monday 09 March 2026 00:54:09 +0000 (0:00:01.135) 0:02:28.984 **********
2026-03-09 00:59:08.657165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657187 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.657196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657212 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.657220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-09 00:59:08.657236 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.657244 | orchestrator |
2026-03-09 00:59:08.657252 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-09 00:59:08.657260 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:01.161) 0:02:30.146 **********
2026-03-09 00:59:08.657268 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.657276 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.657284 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.657292 | orchestrator |
2026-03-09 00:59:08.657300 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-09 00:59:08.657308 | orchestrator | Monday 09 March 2026 00:54:12 +0000 (0:00:02.055) 0:02:32.201 **********
2026-03-09 00:59:08.657316 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.657324 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.657332 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.657340 | orchestrator |
2026-03-09 00:59:08.657347 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-09 00:59:08.657353 | orchestrator | Monday 09 March 2026 00:54:15 +0000 (0:00:02.509) 0:02:34.710 **********
2026-03-09 00:59:08.657360 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.657367 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.657374 | orchestrator |
skipping: [testbed-node-2]
2026-03-09 00:59:08.657381 | orchestrator |
2026-03-09 00:59:08.657387 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-09 00:59:08.657394 | orchestrator | Monday 09 March 2026 00:54:16 +0000 (0:00:00.655) 0:02:35.366 **********
2026-03-09 00:59:08.657401 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.657408 | orchestrator |
2026-03-09 00:59:08.657414 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-09 00:59:08.657421 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:01.222) 0:02:36.588 **********
2026-03-09 00:59:08.657438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 00:59:08.657452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.657933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 00:59:08.657964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.657979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 00:59:08.657991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.658003 | orchestrator |
2026-03-09 00:59:08.658011 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-03-09 00:59:08.658065 | orchestrator | Monday 09 March 2026 00:54:24 +0000 (0:00:07.604) 0:02:44.193 **********
2026-03-09 00:59:08.658109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 00:59:08.658127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.658178 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.658192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 00:59:08.658204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:59:08.658249 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.658258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:59:08.658272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:59:08.658284 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.658292 | orchestrator | 2026-03-09 00:59:08.658314 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-09 00:59:08.658321 | orchestrator | Monday 09 March 2026 00:54:29 +0000 (0:00:04.188) 0:02:48.381 ********** 2026-03-09 00:59:08.658328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658362 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.658368 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.658375 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:59:08.658395 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.658402 | orchestrator | 2026-03-09 00:59:08.658462 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-09 00:59:08.658470 | orchestrator | Monday 09 March 2026 00:54:33 +0000 (0:00:03.895) 0:02:52.277 ********** 2026-03-09 00:59:08.658477 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.658484 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.658491 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.658497 | orchestrator | 2026-03-09 00:59:08.658504 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-09 00:59:08.658511 | orchestrator | Monday 09 March 2026 00:54:34 +0000 (0:00:01.382) 0:02:53.659 ********** 2026-03-09 00:59:08.658518 | orchestrator | changed: [testbed-node-0] 
2026-03-09 00:59:08.658524 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.658531 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.658538 | orchestrator | 2026-03-09 00:59:08.658545 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-09 00:59:08.658551 | orchestrator | Monday 09 March 2026 00:54:36 +0000 (0:00:02.487) 0:02:56.146 ********** 2026-03-09 00:59:08.658563 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.658580 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.658592 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.658603 | orchestrator | 2026-03-09 00:59:08.658615 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-09 00:59:08.658627 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.604) 0:02:56.751 ********** 2026-03-09 00:59:08.658638 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.658650 | orchestrator | 2026-03-09 00:59:08.658662 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-09 00:59:08.658674 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:00.949) 0:02:57.701 ********** 2026-03-09 00:59:08.658687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:59:08.658703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:59:08.658720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:59:08.658735 | orchestrator | 2026-03-09 00:59:08.658827 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-09 00:59:08.658837 | orchestrator | Monday 09 March 2026 00:54:42 +0000 (0:00:04.008) 0:03:01.709 ********** 2026-03-09 00:59:08.658846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:59:08.658855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:59:08.658868 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.658877 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.658885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:59:08.658893 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.658902 | orchestrator | 2026-03-09 00:59:08.658910 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-09 00:59:08.658918 | orchestrator | Monday 09 March 2026 00:54:43 +0000 (0:00:00.753) 0:03:02.463 ********** 2026-03-09 00:59:08.658927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658941 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.658948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658969 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.658976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2026-03-09 00:59:08.658995 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659002 | orchestrator | 2026-03-09 00:59:08.659008 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-09 00:59:08.659030 | orchestrator | Monday 09 March 2026 00:54:43 +0000 (0:00:00.717) 0:03:03.180 ********** 2026-03-09 00:59:08.659037 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.659044 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.659051 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.659058 | orchestrator | 2026-03-09 00:59:08.659064 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-09 00:59:08.659071 | orchestrator | Monday 09 March 2026 00:54:45 +0000 (0:00:01.396) 0:03:04.576 ********** 2026-03-09 00:59:08.659078 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.659085 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.659091 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.659097 | orchestrator | 2026-03-09 00:59:08.659103 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-09 00:59:08.659110 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:02.165) 0:03:06.742 ********** 2026-03-09 00:59:08.659116 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659122 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659128 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659135 | orchestrator | 2026-03-09 00:59:08.659141 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-09 00:59:08.659147 | orchestrator | Monday 09 March 2026 00:54:48 +0000 (0:00:00.601) 0:03:07.344 ********** 2026-03-09 00:59:08.659154 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-09 00:59:08.659160 | orchestrator | 2026-03-09 00:59:08.659166 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-09 00:59:08.659173 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:00.927) 0:03:08.271 ********** 2026-03-09 00:59:08.659185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:59:08.659203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:59:08.659215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:59:08.659227 | orchestrator | 2026-03-09 00:59:08.659233 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-09 00:59:08.659240 | orchestrator | Monday 09 March 2026 00:54:52 +0000 (0:00:03.675) 0:03:11.947 ********** 2026-03-09 00:59:08.659257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:59:08.659265 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:59:08.659298 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-03-09 00:59:08.659317 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659323 | orchestrator | 2026-03-09 00:59:08.659330 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-09 00:59:08.659336 | orchestrator | Monday 09 March 2026 00:54:54 +0000 (0:00:01.306) 0:03:13.253 ********** 2026-03-09 00:59:08.659343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:59:08.659383 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:59:08.659438 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:59:08.659472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:59:08.659478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:59:08.659485 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659491 | orchestrator | 2026-03-09 00:59:08.659497 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-09 00:59:08.659504 | orchestrator | Monday 09 March 2026 00:54:55 +0000 (0:00:01.103) 0:03:14.356 ********** 2026-03-09 00:59:08.659510 | orchestrator | changed: [testbed-node-0] 2026-03-09 
00:59:08.659516 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.659523 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.659529 | orchestrator | 2026-03-09 00:59:08.659535 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-09 00:59:08.659542 | orchestrator | Monday 09 March 2026 00:54:56 +0000 (0:00:01.388) 0:03:15.745 ********** 2026-03-09 00:59:08.659548 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.659554 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.659561 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.659567 | orchestrator | 2026-03-09 00:59:08.659573 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-09 00:59:08.659580 | orchestrator | Monday 09 March 2026 00:54:58 +0000 (0:00:02.230) 0:03:17.976 ********** 2026-03-09 00:59:08.659586 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659592 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659599 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659605 | orchestrator | 2026-03-09 00:59:08.659611 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-09 00:59:08.659621 | orchestrator | Monday 09 March 2026 00:54:59 +0000 (0:00:00.344) 0:03:18.320 ********** 2026-03-09 00:59:08.659628 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659634 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659640 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.659647 | orchestrator | 2026-03-09 00:59:08.659653 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-09 00:59:08.659659 | orchestrator | Monday 09 March 2026 00:54:59 +0000 (0:00:00.629) 0:03:18.950 ********** 2026-03-09 00:59:08.659666 | orchestrator | included: keystone for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-09 00:59:08.659672 | orchestrator | 2026-03-09 00:59:08.659678 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-09 00:59:08.659685 | orchestrator | Monday 09 March 2026 00:55:00 +0000 (0:00:01.091) 0:03:20.041 ********** 2026-03-09 00:59:08.659692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:59:08.659706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:59:08.659715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:59:08.659722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:59:08.659733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.659741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.659779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:59:08.659791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:59:08.659803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.659812 | orchestrator | 2026-03-09 00:59:08.659829 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-09 00:59:08.659839 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:04.633) 0:03:24.675 ********** 2026-03-09 00:59:08.659857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 00:59:08.659876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:59:08.659886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.659896 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.659912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 00:59:08.659924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-09 00:59:08.659935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.659945 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.659961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 00:59:08.659979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:59:08.659995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:59:08.660007 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.660018 | orchestrator | 2026-03-09 00:59:08.660028 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-09 00:59:08.660039 | orchestrator | Monday 09 March 2026 00:55:06 +0000 (0:00:00.835) 0:03:25.511 ********** 2026-03-09 00:59:08.660049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660083 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.660093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660103 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.660113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:59:08.660609 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.660616 | orchestrator | 2026-03-09 00:59:08.660622 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-09 00:59:08.660628 | orchestrator | Monday 09 March 2026 00:55:07 +0000 (0:00:01.062) 0:03:26.573 ********** 2026-03-09 00:59:08.660634 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.660641 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.660647 | orchestrator | 
changed: [testbed-node-1] 2026-03-09 00:59:08.660653 | orchestrator | 2026-03-09 00:59:08.660659 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-09 00:59:08.660665 | orchestrator | Monday 09 March 2026 00:55:08 +0000 (0:00:01.619) 0:03:28.193 ********** 2026-03-09 00:59:08.660671 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.660678 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.660684 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.660690 | orchestrator | 2026-03-09 00:59:08.660696 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-09 00:59:08.660702 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:02.367) 0:03:30.561 ********** 2026-03-09 00:59:08.660709 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.660715 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.660721 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.660727 | orchestrator | 2026-03-09 00:59:08.660733 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-09 00:59:08.660739 | orchestrator | Monday 09 March 2026 00:55:12 +0000 (0:00:00.794) 0:03:31.356 ********** 2026-03-09 00:59:08.660746 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.660752 | orchestrator | 2026-03-09 00:59:08.660780 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-09 00:59:08.660791 | orchestrator | Monday 09 March 2026 00:55:13 +0000 (0:00:01.407) 0:03:32.764 ********** 2026-03-09 00:59:08.660810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:59:08.660818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.660833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:59:08.660889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.660900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:59:08.660911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.660917 | orchestrator | 2026-03-09 00:59:08.660924 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-09 00:59:08.660930 | orchestrator | Monday 09 March 2026 00:55:17 +0000 (0:00:04.306) 0:03:37.070 ********** 2026-03-09 00:59:08.660937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 00:59:08.660949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.660956 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.661005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2026-03-09 00:59:08.661298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.661312 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.661346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 00:59:08.661366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.661373 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.661379 | orchestrator | 2026-03-09 00:59:08.661386 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-09 00:59:08.661393 | orchestrator | Monday 09 March 2026 00:55:18 +0000 (0:00:01.119) 0:03:38.190 ********** 2026-03-09 00:59:08.661399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661484 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.661491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661498 | orchestrator | 
skipping: [testbed-node-0] 2026-03-09 00:59:08.661504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:59:08.661517 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.661523 | orchestrator | 2026-03-09 00:59:08.661530 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-09 00:59:08.661536 | orchestrator | Monday 09 March 2026 00:55:19 +0000 (0:00:00.955) 0:03:39.146 ********** 2026-03-09 00:59:08.661542 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.661549 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.661555 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.661561 | orchestrator | 2026-03-09 00:59:08.661567 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-09 00:59:08.661574 | orchestrator | Monday 09 March 2026 00:55:21 +0000 (0:00:01.364) 0:03:40.510 ********** 2026-03-09 00:59:08.661580 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.661586 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.661592 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.661599 | orchestrator | 2026-03-09 00:59:08.661605 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-09 00:59:08.661611 | orchestrator | Monday 09 March 2026 00:55:23 +0000 (0:00:02.385) 0:03:42.896 ********** 2026-03-09 00:59:08.661618 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.661624 | orchestrator | 
2026-03-09 00:59:08.661630 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-09 00:59:08.661642 | orchestrator | Monday 09 March 2026 00:55:25 +0000 (0:00:01.430) 0:03:44.326 ********** 2026-03-09 00:59:08.662199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:59:08.662223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:59:08.662412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:59:08.662433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662526 | orchestrator | 2026-03-09 00:59:08.662533 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-09 00:59:08.662539 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:04.070) 0:03:48.397 ********** 2026-03-09 00:59:08.662549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:59:08.662556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662579 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.662585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:59:08.662599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662618 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.662630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:59:08.662636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.662663 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.662669 | orchestrator | 2026-03-09 00:59:08.662675 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-09 00:59:08.662681 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.848) 0:03:49.246 ********** 2026-03-09 00:59:08.662687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662701 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.662707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662719 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.662725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}})  2026-03-09 00:59:08.662737 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.662743 | orchestrator | 2026-03-09 00:59:08.662749 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-09 00:59:08.662773 | orchestrator | Monday 09 March 2026 00:55:31 +0000 (0:00:01.496) 0:03:50.743 ********** 2026-03-09 00:59:08.662780 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.662786 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.662792 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.662797 | orchestrator | 2026-03-09 00:59:08.662803 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-09 00:59:08.662809 | orchestrator | Monday 09 March 2026 00:55:32 +0000 (0:00:01.458) 0:03:52.201 ********** 2026-03-09 00:59:08.662819 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.662825 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.662835 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.662841 | orchestrator | 2026-03-09 00:59:08.662847 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-09 00:59:08.662853 | orchestrator | Monday 09 March 2026 00:55:35 +0000 (0:00:02.179) 0:03:54.380 ********** 2026-03-09 00:59:08.662859 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.662865 | orchestrator | 2026-03-09 00:59:08.662871 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-09 00:59:08.662877 | orchestrator | Monday 09 March 2026 00:55:36 +0000 (0:00:01.446) 0:03:55.826 ********** 2026-03-09 00:59:08.662883 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:59:08.662889 | orchestrator | 2026-03-09 00:59:08.662895 | orchestrator | TASK [haproxy-config : Copying over 
mariadb haproxy config] ******************** 2026-03-09 00:59:08.662901 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:02.915) 0:03:58.742 ********** 2026-03-09 00:59:08.662912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-09 00:59:08.662920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:59:08.662927 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.662938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:59:08.662949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:59:08.662955 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.662965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': 
'30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:59:08.662972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:59:08.662985 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.662992 | 
orchestrator | 2026-03-09 00:59:08.662998 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-09 00:59:08.663004 | orchestrator | Monday 09 March 2026 00:55:42 +0000 (0:00:02.898) 0:04:01.640 ********** 2026-03-09 00:59:08.663013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:59:08.663020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-09 00:59:08.663034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:59:08.663041 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:59:08.663053 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:59:08.663072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': 
'1'}}})  2026-03-09 00:59:08.663081 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663087 | orchestrator | 2026-03-09 00:59:08.663093 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-09 00:59:08.663099 | orchestrator | Monday 09 March 2026 00:55:45 +0000 (0:00:02.809) 0:04:04.450 ********** 2026-03-09 00:59:08.663109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663129 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663147 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:59:08.663173 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663180 | orchestrator | 2026-03-09 00:59:08.663188 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-09 00:59:08.663195 | orchestrator | Monday 09 March 2026 00:55:48 +0000 (0:00:02.964) 0:04:07.415 ********** 2026-03-09 00:59:08.663202 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.663210 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.663217 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.663223 | orchestrator | 2026-03-09 00:59:08.663231 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-09 00:59:08.663238 | orchestrator | Monday 09 March 2026 00:55:50 +0000 (0:00:02.072) 0:04:09.487 ********** 2026-03-09 00:59:08.663245 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663252 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663259 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663266 | orchestrator | 2026-03-09 00:59:08.663273 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-09 00:59:08.663280 | orchestrator | Monday 09 March 2026 00:55:51 +0000 (0:00:01.560) 0:04:11.048 ********** 2026-03-09 00:59:08.663288 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663298 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663306 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663313 | orchestrator | 2026-03-09 00:59:08.663320 | orchestrator | TASK [include_role : memcached] 
************************************************ 2026-03-09 00:59:08.663327 | orchestrator | Monday 09 March 2026 00:55:52 +0000 (0:00:00.348) 0:04:11.397 ********** 2026-03-09 00:59:08.663335 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.663345 | orchestrator | 2026-03-09 00:59:08.663354 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-09 00:59:08.663363 | orchestrator | Monday 09 March 2026 00:55:53 +0000 (0:00:01.469) 0:04:12.866 ********** 2026-03-09 00:59:08.663374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:59:08.663397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:59:08.663409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:59:08.663426 | orchestrator | 2026-03-09 00:59:08.663436 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-09 00:59:08.663445 | orchestrator | Monday 09 March 2026 00:55:55 +0000 (0:00:01.534) 0:04:14.400 ********** 2026-03-09 00:59:08.663453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:59:08.663464 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:59:08.663585 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:59:08.663598 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663604 | orchestrator | 2026-03-09 00:59:08.663610 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2026-03-09 00:59:08.663616 | orchestrator | Monday 09 March 2026 00:55:55 +0000 (0:00:00.529) 0:04:14.929 ********** 2026-03-09 00:59:08.663623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 00:59:08.663643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 00:59:08.663649 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663655 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 00:59:08.663667 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663673 | orchestrator | 2026-03-09 00:59:08.663679 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-09 00:59:08.663686 | orchestrator | Monday 09 March 2026 00:55:56 +0000 (0:00:00.976) 0:04:15.906 ********** 2026-03-09 00:59:08.663692 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.663698 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.663704 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.663710 | orchestrator | 2026-03-09 00:59:08.663716 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
**********
2026-03-09 00:59:08.663722 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:00.534) 0:04:16.441 **********
2026-03-09 00:59:08.663728 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.663733 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.663739 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.663745 | orchestrator |
2026-03-09 00:59:08.663751 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-09 00:59:08.663810 | orchestrator | Monday 09 March 2026 00:55:58 +0000 (0:00:01.542) 0:04:17.983 **********
2026-03-09 00:59:08.663817 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.663823 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.663829 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.663835 | orchestrator |
2026-03-09 00:59:08.663841 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-09 00:59:08.663847 | orchestrator | Monday 09 March 2026 00:55:59 +0000 (0:00:00.355) 0:04:18.339 **********
2026-03-09 00:59:08.663853 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.663860 | orchestrator |
2026-03-09 00:59:08.663866 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-09 00:59:08.663872 | orchestrator | Monday 09 March 2026 00:56:00 +0000 (0:00:01.805) 0:04:20.145 **********
2026-03-09 00:59:08.663929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-09 00:59:08.663940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.663958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.663966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.663972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-09 00:59:08.664021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.664152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-09 00:59:08.664204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-09 00:59:08.664244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-09 00:59:08.664250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-09 00:59:08.664401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-09 00:59:08.664618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.664635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-09 00:59:08.664719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-09 00:59:08.664729 | orchestrator |
2026-03-09 00:59:08.664735 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-09 00:59:08.664741 | orchestrator | Monday 09 March 2026 00:56:05 +0000 (0:00:04.676) 0:04:24.822 **********
2026-03-09 00:59:08.664748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-09 00:59:08.664774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.664840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-09 00:59:08.664850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:59:08.664856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.664929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.664944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:59:08.664953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.664973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:59:08.665137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665160 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.665166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:59:08.665230 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665236 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.665242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 00:59:08.665253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2026-03-09 00:59:08.665316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:59:08.665323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-03-09 00:59:08.665339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:59:08.665430 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.665467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:59:08.665486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:59:08.665495 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.665504 | orchestrator | 2026-03-09 00:59:08.665513 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-09 00:59:08.665522 | orchestrator | Monday 09 March 2026 00:56:07 +0000 (0:00:01.820) 0:04:26.642 ********** 2026-03-09 00:59:08.665533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:59:08.665541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:59:08.665549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:59:08.665566 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.665579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:59:08.665589 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.665599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}})  2026-03-09 00:59:08.665609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:59:08.665620 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.665630 | orchestrator | 2026-03-09 00:59:08.665639 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-09 00:59:08.665648 | orchestrator | Monday 09 March 2026 00:56:09 +0000 (0:00:02.367) 0:04:29.009 ********** 2026-03-09 00:59:08.665657 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.665667 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.665677 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.665686 | orchestrator | 2026-03-09 00:59:08.665696 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-09 00:59:08.665705 | orchestrator | Monday 09 March 2026 00:56:11 +0000 (0:00:01.362) 0:04:30.372 ********** 2026-03-09 00:59:08.665715 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.665724 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.665730 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.665735 | orchestrator | 2026-03-09 00:59:08.665741 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-09 00:59:08.665747 | orchestrator | Monday 09 March 2026 00:56:13 +0000 (0:00:02.280) 0:04:32.652 ********** 2026-03-09 00:59:08.665775 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.665782 | orchestrator | 2026-03-09 00:59:08.665788 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-09 00:59:08.665794 | orchestrator | Monday 09 March 2026 00:56:14 +0000 (0:00:01.332) 
0:04:33.985 ********** 2026-03-09 00:59:08.665828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.665836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.665852 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.665858 | orchestrator | 2026-03-09 00:59:08.665864 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-09 00:59:08.665870 | orchestrator | Monday 09 March 2026 00:56:18 +0000 (0:00:04.237) 0:04:38.222 ********** 2026-03-09 00:59:08.665877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.665883 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.665906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.665913 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.665919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.665933 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.665939 | orchestrator | 2026-03-09 00:59:08.665944 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-09 00:59:08.665950 | orchestrator | Monday 09 March 2026 00:56:19 +0000 (0:00:00.585) 0:04:38.807 ********** 2026-03-09 00:59:08.665956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.665962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.665969 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.665975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.665984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.665990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.665996 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.666002 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:59:08.666008 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.666037 | orchestrator | 2026-03-09 00:59:08.666046 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-09 00:59:08.666053 | orchestrator | Monday 09 March 2026 00:56:20 +0000 (0:00:00.867) 0:04:39.675 ********** 2026-03-09 00:59:08.666060 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.666067 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.666074 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.666081 | orchestrator | 2026-03-09 00:59:08.666088 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-09 00:59:08.666095 | orchestrator | Monday 09 March 2026 00:56:22 +0000 (0:00:02.110) 0:04:41.785 ********** 2026-03-09 00:59:08.666102 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.666109 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.666116 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.666123 | orchestrator | 2026-03-09 00:59:08.666130 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-09 00:59:08.666137 | orchestrator | Monday 09 March 2026 00:56:24 +0000 (0:00:01.986) 0:04:43.772 ********** 2026-03-09 00:59:08.666144 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.666151 | orchestrator | 2026-03-09 00:59:08.666158 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-09 00:59:08.666165 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:01.738) 0:04:45.510 ********** 2026-03-09 00:59:08.666196 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.666211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.666236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.666273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666296 | orchestrator | 2026-03-09 00:59:08.666302 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-09 00:59:08.666308 | orchestrator | Monday 09 March 2026 00:56:30 +0000 (0:00:04.617) 
0:04:50.128 ********** 2026-03-09 00:59:08.666331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.666343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666349 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666356 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.666365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.666372 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.666407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666420 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.666429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.666435 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:59:08.666442 | orchestrator |
2026-03-09 00:59:08.666448 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-09 00:59:08.666454 | orchestrator | Monday 09 March 2026 00:56:32 +0000 (0:00:01.502) 0:04:51.631 **********
2026-03-09 00:59:08.666460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666498 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.666507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666567 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.666575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-09 00:59:08.666611 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.666621 | orchestrator |
2026-03-09 00:59:08.666630 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-09 00:59:08.666640 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:01.014) 0:04:52.645 **********
2026-03-09 00:59:08.666649 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.666657 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.666666 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.666676 | orchestrator |
2026-03-09 00:59:08.666685 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-09 00:59:08.666694 | orchestrator | Monday 09 March 2026 00:56:34 +0000 (0:00:01.384) 0:04:54.029 **********
2026-03-09 00:59:08.666704 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.666713 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.666722 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.666732 | orchestrator |
2026-03-09 00:59:08.666741 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-09 00:59:08.666751 | orchestrator | Monday 09 March 2026 00:56:37 +0000 (0:00:02.431) 0:04:56.461 **********
2026-03-09 00:59:08.666795 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.666804 | orchestrator |
2026-03-09 00:59:08.666810 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-09 00:59:08.666816 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:01.885) 0:04:58.346 **********
2026-03-09 00:59:08.666829 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-novncproxy)
2026-03-09 00:59:08.666835 | orchestrator |
2026-03-09 00:59:08.666841 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-09 00:59:08.666847 | orchestrator | Monday 09 March 2026 00:56:40 +0000 (0:00:00.892) 0:04:59.238 **********
2026-03-09 00:59:08.666853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666873 | orchestrator |
2026-03-09 00:59:08.666908 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-09 00:59:08.666916 | orchestrator | Monday 09 March 2026 00:56:45 +0000 (0:00:05.125) 0:05:04.364 **********
2026-03-09 00:59:08.666922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666929 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.666935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666941 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.666947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.666958 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.666964 | orchestrator |
2026-03-09 00:59:08.666970 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-09 00:59:08.666976 | orchestrator | Monday 09 March 2026 00:56:46 +0000 (0:00:01.156) 0:05:05.520 **********
2026-03-09 00:59:08.666985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.666992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.666998 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.667015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.667021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.667027 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-09 00:59:08.667039 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667045 | orchestrator |
2026-03-09 00:59:08.667051 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-09 00:59:08.667057 | orchestrator | Monday 09 March 2026 00:56:48 +0000 (0:00:01.776) 0:05:07.297 **********
2026-03-09 00:59:08.667063 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.667069 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.667076 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.667086 | orchestrator |
2026-03-09 00:59:08.667096 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-09 00:59:08.667105 | orchestrator | Monday 09 March 2026 00:56:50 +0000 (0:00:02.667) 0:05:09.965 **********
2026-03-09 00:59:08.667115 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:59:08.667125 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:59:08.667135 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:59:08.667145 | orchestrator |
2026-03-09 00:59:08.667171 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-09 00:59:08.667179 | orchestrator | Monday 09 March 2026 00:56:54 +0000 (0:00:03.427) 0:05:13.393 **********
2026-03-09 00:59:08.667185 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-09 00:59:08.667191 | orchestrator |
2026-03-09 00:59:08.667197 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-09 00:59:08.667203 | orchestrator | Monday 09 March 2026 00:56:55 +0000 (0:00:01.623) 0:05:15.016 **********
2026-03-09 00:59:08.667209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667223 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667235 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667251 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667257 | orchestrator |
2026-03-09 00:59:08.667263 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-09 00:59:08.667269 | orchestrator | Monday 09 March 2026 00:56:57 +0000 (0:00:01.362) 0:05:16.379 **********
2026-03-09 00:59:08.667275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667281 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667293 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-09 00:59:08.667306 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667312 | orchestrator |
2026-03-09 00:59:08.667334 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-09 00:59:08.667341 | orchestrator | Monday 09 March 2026 00:56:58 +0000 (0:00:01.701) 0:05:18.080 **********
2026-03-09 00:59:08.667347 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667357 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667363 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667369 | orchestrator |
2026-03-09 00:59:08.667375 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-09 00:59:08.667381 | orchestrator | Monday 09 March 2026 00:57:00 +0000 (0:00:02.113) 0:05:20.194 **********
2026-03-09 00:59:08.667387 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.667393 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.667399 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.667405 | orchestrator |
2026-03-09 00:59:08.667411 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-09 00:59:08.667416 | orchestrator | Monday 09 March 2026 00:57:03 +0000 (0:00:02.550) 0:05:22.744 **********
2026-03-09 00:59:08.667422 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.667428 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.667434 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.667440 | orchestrator |
2026-03-09 00:59:08.667446 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-09 00:59:08.667452 | orchestrator | Monday 09 March 2026 00:57:06 +0000 (0:00:03.280) 0:05:26.024 **********
2026-03-09 00:59:08.667458 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-09 00:59:08.667464 | orchestrator |
2026-03-09 00:59:08.667470 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-09 00:59:08.667476 | orchestrator | Monday 09 March 2026 00:57:07 +0000 (0:00:00.969) 0:05:26.994 **********
2026-03-09 00:59:08.667482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667488 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667504 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667516 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667522 | orchestrator |
2026-03-09 00:59:08.667528 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-09 00:59:08.667534 | orchestrator | Monday 09 March 2026 00:57:09 +0000 (0:00:01.571) 0:05:28.566 **********
2026-03-09 00:59:08.667540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667550 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667581 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-09 00:59:08.667593 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667599 | orchestrator |
2026-03-09 00:59:08.667605 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-09 00:59:08.667611 | orchestrator | Monday 09 March 2026 00:57:10 +0000 (0:00:01.595) 0:05:30.161 **********
2026-03-09 00:59:08.667617 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:59:08.667623 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:59:08.667629 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:59:08.667634 | orchestrator |
2026-03-09 00:59:08.667640 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-09 00:59:08.667646 | orchestrator | Monday 09 March 2026 00:57:12 +0000 (0:00:01.831) 0:05:31.993 **********
2026-03-09 00:59:08.667653 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.667658 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.667665 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.667671 | orchestrator |
2026-03-09 00:59:08.667677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-09 00:59:08.667683 | orchestrator | Monday 09 March 2026 00:57:15 +0000 (0:00:02.830) 0:05:34.823 **********
2026-03-09 00:59:08.667689 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:59:08.667695 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:59:08.667701 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:59:08.667707 | orchestrator |
2026-03-09 00:59:08.667713 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-09 00:59:08.667719 | orchestrator | Monday 09 March 2026 00:57:19 +0000 (0:00:03.795) 0:05:38.619 **********
2026-03-09 00:59:08.667725 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:59:08.667731 | orchestrator |
2026-03-09 00:59:08.667737 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-09 00:59:08.667746 | orchestrator | Monday 09 March 2026 00:57:21 +0000 (0:00:01.880) 0:05:40.499 **********
2026-03-09 00:59:08.667772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:59:08.667786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:59:08.667793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.667836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:59:08.667843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:59:08.667854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:59:08.667888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:59:08.667901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.667910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:59:08.667934 | orchestrator |
2026-03-09 00:59:08.667940 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-09 00:59:08.667946 | orchestrator | Monday 09 March 2026 00:57:25 +0000 (0:00:03.860) 0:05:44.360 **********
2026-03-09 00:59:08.667970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:59:08.667978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:59:08.667988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.667999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 00:59:08.668009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 00:59:08.668044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 00:59:08.668056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.668066 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.668075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.668081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.668095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.668102 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.668108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.668115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:59:08.668139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.668147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:59:08.668154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:59:08.668167 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.668173 | orchestrator | 2026-03-09 00:59:08.668179 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-09 
00:59:08.668188 | orchestrator | Monday 09 March 2026 00:57:26 +0000 (0:00:00.989) 0:05:45.349 ********** 2026-03-09 00:59:08.668194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668207 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.668213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668225 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.668230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-09 00:59:08.668242 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.668248 | orchestrator | 2026-03-09 00:59:08.668257 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2026-03-09 00:59:08.668267 | orchestrator | Monday 09 March 2026 00:57:27 +0000 (0:00:01.770) 0:05:47.120 ********** 2026-03-09 00:59:08.668275 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.668284 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.668294 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.668303 | orchestrator | 2026-03-09 00:59:08.668313 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-09 00:59:08.668323 | orchestrator | Monday 09 March 2026 00:57:29 +0000 (0:00:01.510) 0:05:48.631 ********** 2026-03-09 00:59:08.668332 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.668342 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.668352 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.668361 | orchestrator | 2026-03-09 00:59:08.668371 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-09 00:59:08.668412 | orchestrator | Monday 09 March 2026 00:57:31 +0000 (0:00:02.332) 0:05:50.963 ********** 2026-03-09 00:59:08.668420 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.668426 | orchestrator | 2026-03-09 00:59:08.668432 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-09 00:59:08.668438 | orchestrator | Monday 09 March 2026 00:57:33 +0000 (0:00:01.531) 0:05:52.495 ********** 2026-03-09 00:59:08.668445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:59:08.668462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:59:08.668469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:59:08.668476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:59:08.668500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:59:08.668513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:59:08.668520 | orchestrator | 2026-03-09 00:59:08.668526 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-09 00:59:08.668592 | orchestrator | Monday 09 March 2026 00:57:39 +0000 (0:00:06.367) 0:05:58.862 ********** 2026-03-09 00:59:08.668608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:59:08.668616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:59:08.668642 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.668649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:59:08.668661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:59:08.668668 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.668688 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:59:08.668695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:59:08.668702 | 
orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.668711 | orchestrator | 2026-03-09 00:59:08.668721 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-09 00:59:08.668737 | orchestrator | Monday 09 March 2026 00:57:40 +0000 (0:00:00.686) 0:05:59.549 ********** 2026-03-09 00:59:08.668803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:59:08.668815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:59:08.668825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:59:08.668835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:59:08.668845 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.668854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:59:08.668863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
2026-03-09 00:59:08.668874 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.668884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:59:08.668893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:59:08.668909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:59:08.668919 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.668929 | orchestrator | 2026-03-09 00:59:08.668939 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-09 00:59:08.668949 | orchestrator | Monday 09 March 2026 00:57:41 +0000 (0:00:01.229) 0:06:00.779 ********** 2026-03-09 00:59:08.668959 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.668967 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.668973 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.668979 | orchestrator | 2026-03-09 00:59:08.668986 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-09 00:59:08.668992 | orchestrator | Monday 09 March 2026 00:57:42 +0000 (0:00:00.953) 0:06:01.732 ********** 2026-03-09 00:59:08.668998 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.669004 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.669010 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.669016 | orchestrator | 2026-03-09 
00:59:08.669023 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-09 00:59:08.669029 | orchestrator | Monday 09 March 2026 00:57:44 +0000 (0:00:01.632) 0:06:03.364 ********** 2026-03-09 00:59:08.669035 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.669041 | orchestrator | 2026-03-09 00:59:08.669047 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-09 00:59:08.669053 | orchestrator | Monday 09 March 2026 00:57:45 +0000 (0:00:01.818) 0:06:05.183 ********** 2026-03-09 00:59:08.669070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:59:08.669104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:59:08.669119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669135 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:59:08.669200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669249 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:59:08.669258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:59:08.669301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:59:08.669345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669372 | orchestrator | 2026-03-09 00:59:08.669378 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-09 00:59:08.669384 | orchestrator | Monday 09 March 2026 00:57:50 +0000 (0:00:05.004) 0:06:10.187 ********** 2026-03-09 00:59:08.669395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:59:08.669406 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:59:08.669446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:59:08.669480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669489 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.669499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:59:08.669567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:59:08.669592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:59:08.669609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669665 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.669675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:59:08.669686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:59:08.669693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:59:08.669709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:59:08.669715 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.669722 | orchestrator | 2026-03-09 00:59:08.669728 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-09 00:59:08.669734 | orchestrator | Monday 09 March 2026 00:57:52 +0000 (0:00:01.340) 0:06:11.528 ********** 2026-03-09 00:59:08.669741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': 
True}})  2026-03-09 00:59:08.669747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-09 00:59:08.669807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669824 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.669831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-09 00:59:08.669840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-09 00:59:08.669847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669859 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.669865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-09 00:59:08.669871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-09 00:59:08.669877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:59:08.669890 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.669896 | orchestrator | 2026-03-09 00:59:08.669902 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-09 00:59:08.669908 | orchestrator | Monday 09 March 2026 00:57:53 +0000 (0:00:01.173) 0:06:12.702 ********** 2026-03-09 00:59:08.669918 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.669925 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.669931 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:59:08.669937 | orchestrator | 2026-03-09 00:59:08.669943 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-09 00:59:08.669949 | orchestrator | Monday 09 March 2026 00:57:53 +0000 (0:00:00.504) 0:06:13.207 ********** 2026-03-09 00:59:08.669955 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.669962 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.669968 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.669984 | orchestrator | 2026-03-09 00:59:08.669994 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-09 00:59:08.670004 | orchestrator | Monday 09 March 2026 00:57:55 +0000 (0:00:01.596) 0:06:14.804 ********** 2026-03-09 00:59:08.670039 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.670046 | orchestrator | 2026-03-09 00:59:08.670052 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-09 00:59:08.670059 | orchestrator | Monday 09 March 2026 00:57:57 +0000 (0:00:01.937) 0:06:16.741 ********** 2026-03-09 00:59:08.670067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:59:08.670078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:59:08.670085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:59:08.670091 | orchestrator | 2026-03-09 00:59:08.670097 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-09 00:59:08.670103 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:02.820) 0:06:19.562 ********** 2026-03-09 00:59:08.670114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:59:08.670127 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:59:08.670140 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:59:08.670158 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670164 | orchestrator | 2026-03-09 00:59:08.670170 | orchestrator | TASK 
[haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-09 00:59:08.670177 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:00.481) 0:06:20.043 ********** 2026-03-09 00:59:08.670183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:59:08.670189 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:59:08.670201 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:59:08.670218 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670224 | orchestrator | 2026-03-09 00:59:08.670230 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-09 00:59:08.670236 | orchestrator | Monday 09 March 2026 00:58:01 +0000 (0:00:01.140) 0:06:21.184 ********** 2026-03-09 00:59:08.670242 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670248 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670254 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670260 | orchestrator | 2026-03-09 00:59:08.670266 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-09 00:59:08.670275 | orchestrator | Monday 09 March 2026 00:58:02 +0000 (0:00:00.508) 0:06:21.692 ********** 2026-03-09 00:59:08.670281 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670288 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670293 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 00:59:08.670299 | orchestrator | 2026-03-09 00:59:08.670305 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-09 00:59:08.670311 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:01.550) 0:06:23.243 ********** 2026-03-09 00:59:08.670317 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:59:08.670323 | orchestrator | 2026-03-09 00:59:08.670329 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-09 00:59:08.670335 | orchestrator | Monday 09 March 2026 00:58:05 +0000 (0:00:01.966) 0:06:25.209 ********** 2026-03-09 00:59:08.670341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:59:08.670397 | orchestrator | 2026-03-09 00:59:08.670403 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-09 00:59:08.670409 | orchestrator | Monday 09 March 2026 00:58:12 +0000 (0:00:06.663) 0:06:31.873 ********** 2026-03-09 00:59:08.670415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670435 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670451 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670457 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670473 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:59:08.670480 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670486 | orchestrator | 2026-03-09 00:59:08.670492 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-09 00:59:08.670501 | orchestrator | Monday 09 March 2026 00:58:13 +0000 (0:00:00.802) 0:06:32.676 ********** 2026-03-09 00:59:08.670508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670533 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670563 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:59:08.670602 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670607 | orchestrator | 2026-03-09 00:59:08.670613 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-09 00:59:08.670619 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:02.022) 0:06:34.699 ********** 2026-03-09 00:59:08.670625 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.670631 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.670637 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.670643 | orchestrator | 2026-03-09 00:59:08.670649 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-09 00:59:08.670654 | orchestrator | Monday 09 March 2026 00:58:16 +0000 (0:00:01.450) 0:06:36.149 ********** 2026-03-09 00:59:08.670660 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.670666 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.670672 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.670678 | orchestrator | 2026-03-09 00:59:08.670684 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-09 00:59:08.670690 | orchestrator | Monday 09 March 2026 00:58:19 +0000 (0:00:02.385) 0:06:38.534 ********** 2026-03-09 00:59:08.670698 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
00:59:08.670708 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670718 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670727 | orchestrator | 2026-03-09 00:59:08.670737 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-09 00:59:08.670746 | orchestrator | Monday 09 March 2026 00:58:19 +0000 (0:00:00.383) 0:06:38.918 ********** 2026-03-09 00:59:08.670774 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670785 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670796 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670806 | orchestrator | 2026-03-09 00:59:08.670815 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-09 00:59:08.670831 | orchestrator | Monday 09 March 2026 00:58:20 +0000 (0:00:00.389) 0:06:39.307 ********** 2026-03-09 00:59:08.670839 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670845 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670857 | orchestrator | 2026-03-09 00:59:08.670863 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-09 00:59:08.670869 | orchestrator | Monday 09 March 2026 00:58:20 +0000 (0:00:00.855) 0:06:40.163 ********** 2026-03-09 00:59:08.670874 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670880 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670886 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670892 | orchestrator | 2026-03-09 00:59:08.670899 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-09 00:59:08.670905 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:00.389) 0:06:40.553 ********** 2026-03-09 00:59:08.670910 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
00:59:08.670916 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670922 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670928 | orchestrator | 2026-03-09 00:59:08.670934 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-09 00:59:08.670946 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:00.361) 0:06:40.915 ********** 2026-03-09 00:59:08.670951 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.670957 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.670963 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.670969 | orchestrator | 2026-03-09 00:59:08.670975 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-09 00:59:08.670981 | orchestrator | Monday 09 March 2026 00:58:22 +0000 (0:00:00.997) 0:06:41.912 ********** 2026-03-09 00:59:08.670987 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.670993 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.670999 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671005 | orchestrator | 2026-03-09 00:59:08.671011 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-09 00:59:08.671017 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.686) 0:06:42.598 ********** 2026-03-09 00:59:08.671023 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671028 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671034 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671040 | orchestrator | 2026-03-09 00:59:08.671046 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-09 00:59:08.671052 | orchestrator | Monday 09 March 2026 00:58:23 +0000 (0:00:00.395) 0:06:42.994 ********** 2026-03-09 00:59:08.671058 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671064 | orchestrator | ok: 
[testbed-node-1] 2026-03-09 00:59:08.671069 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671075 | orchestrator | 2026-03-09 00:59:08.671081 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-09 00:59:08.671087 | orchestrator | Monday 09 March 2026 00:58:24 +0000 (0:00:00.929) 0:06:43.923 ********** 2026-03-09 00:59:08.671093 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671099 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671109 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671115 | orchestrator | 2026-03-09 00:59:08.671121 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-09 00:59:08.671127 | orchestrator | Monday 09 March 2026 00:58:26 +0000 (0:00:01.519) 0:06:45.443 ********** 2026-03-09 00:59:08.671133 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671139 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671145 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671151 | orchestrator | 2026-03-09 00:59:08.671157 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-09 00:59:08.671163 | orchestrator | Monday 09 March 2026 00:58:27 +0000 (0:00:00.993) 0:06:46.437 ********** 2026-03-09 00:59:08.671168 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.671175 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.671180 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.671186 | orchestrator | 2026-03-09 00:59:08.671192 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-09 00:59:08.671198 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:10.660) 0:06:57.097 ********** 2026-03-09 00:59:08.671204 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671210 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671216 | orchestrator | 
ok: [testbed-node-1] 2026-03-09 00:59:08.671221 | orchestrator | 2026-03-09 00:59:08.671227 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-09 00:59:08.671233 | orchestrator | Monday 09 March 2026 00:58:38 +0000 (0:00:00.802) 0:06:57.899 ********** 2026-03-09 00:59:08.671239 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.671245 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.671251 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.671256 | orchestrator | 2026-03-09 00:59:08.671262 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-09 00:59:08.671268 | orchestrator | Monday 09 March 2026 00:58:48 +0000 (0:00:09.354) 0:07:07.254 ********** 2026-03-09 00:59:08.671274 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671284 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671290 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671296 | orchestrator | 2026-03-09 00:59:08.671302 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-09 00:59:08.671308 | orchestrator | Monday 09 March 2026 00:58:51 +0000 (0:00:03.193) 0:07:10.447 ********** 2026-03-09 00:59:08.671314 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:59:08.671320 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:59:08.671327 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:59:08.671332 | orchestrator | 2026-03-09 00:59:08.671338 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-09 00:59:08.671344 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:09.193) 0:07:19.641 ********** 2026-03-09 00:59:08.671350 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671356 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671362 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:59:08.671368 | orchestrator | 2026-03-09 00:59:08.671374 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-09 00:59:08.671380 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:00.419) 0:07:20.061 ********** 2026-03-09 00:59:08.671389 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671404 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671414 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.671424 | orchestrator | 2026-03-09 00:59:08.671433 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-09 00:59:08.671443 | orchestrator | Monday 09 March 2026 00:59:01 +0000 (0:00:00.453) 0:07:20.514 ********** 2026-03-09 00:59:08.671453 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671464 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671473 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.671482 | orchestrator | 2026-03-09 00:59:08.671492 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-09 00:59:08.671499 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.784) 0:07:21.299 ********** 2026-03-09 00:59:08.671505 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671511 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671517 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.671522 | orchestrator | 2026-03-09 00:59:08.671528 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-09 00:59:08.671534 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.385) 0:07:21.684 ********** 2026-03-09 00:59:08.671540 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671546 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671552 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:59:08.671560 | orchestrator | 2026-03-09 00:59:08.671569 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-09 00:59:08.671579 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.371) 0:07:22.056 ********** 2026-03-09 00:59:08.671587 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:59:08.671596 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:59:08.671604 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:59:08.671613 | orchestrator | 2026-03-09 00:59:08.671623 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-09 00:59:08.671633 | orchestrator | Monday 09 March 2026 00:59:03 +0000 (0:00:00.434) 0:07:22.491 ********** 2026-03-09 00:59:08.671644 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671654 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671662 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671668 | orchestrator | 2026-03-09 00:59:08.671674 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-09 00:59:08.671680 | orchestrator | Monday 09 March 2026 00:59:04 +0000 (0:00:01.394) 0:07:23.886 ********** 2026-03-09 00:59:08.671686 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:59:08.671692 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:59:08.671705 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:59:08.671711 | orchestrator | 2026-03-09 00:59:08.671717 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:59:08.671723 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:59:08.671734 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:59:08.671740 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 
failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:59:08.671746 | orchestrator | 2026-03-09 00:59:08.671752 | orchestrator | 2026-03-09 00:59:08.671794 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:59:08.671800 | orchestrator | Monday 09 March 2026 00:59:05 +0000 (0:00:00.905) 0:07:24.791 ********** 2026-03-09 00:59:08.671806 | orchestrator | =============================================================================== 2026-03-09 00:59:08.671812 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.66s 2026-03-09 00:59:08.671818 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.35s 2026-03-09 00:59:08.671824 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.19s 2026-03-09 00:59:08.671830 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.86s 2026-03-09 00:59:08.671836 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 7.60s 2026-03-09 00:59:08.671841 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.69s 2026-03-09 00:59:08.671847 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.66s 2026-03-09 00:59:08.671853 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.37s 2026-03-09 00:59:08.671859 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 6.12s 2026-03-09 00:59:08.671865 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.02s 2026-03-09 00:59:08.671871 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.32s 2026-03-09 00:59:08.671877 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.13s 2026-03-09 
00:59:08.671883 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.00s 2026-03-09 00:59:08.671889 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.81s 2026-03-09 00:59:08.671895 | orchestrator | loadbalancer : Ensuring config directories exist ------------------------ 4.76s 2026-03-09 00:59:08.671901 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.68s 2026-03-09 00:59:08.671907 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.63s 2026-03-09 00:59:08.671913 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.62s 2026-03-09 00:59:08.671919 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.31s 2026-03-09 00:59:08.671925 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.24s 2026-03-09 00:59:08.671938 | orchestrator | 2026-03-09 00:59:08 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:08.671944 | orchestrator | 2026-03-09 00:59:08 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:08.671950 | orchestrator | 2026-03-09 00:59:08 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 00:59:08.671956 | orchestrator | 2026-03-09 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:11.690745 | orchestrator | 2026-03-09 00:59:11 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:11.691188 | orchestrator | 2026-03-09 00:59:11 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:11.691572 | orchestrator | 2026-03-09 00:59:11 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 00:59:11.691605 | orchestrator | 2026-03-09 00:59:11 | INFO  | Wait 1 second(s) 
until the next check 2026-03-09 00:59:14.736675 | orchestrator | 2026-03-09 00:59:14 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:14.736989 | orchestrator | 2026-03-09 00:59:14 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:14.738210 | orchestrator | 2026-03-09 00:59:14 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 00:59:14.738249 | orchestrator | 2026-03-09 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:17.778450 | orchestrator | 2026-03-09 00:59:17 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:17.779227 | orchestrator | 2026-03-09 00:59:17 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:17.780280 | orchestrator | 2026-03-09 00:59:17 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 00:59:17.780352 | orchestrator | 2026-03-09 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:20.827219 | orchestrator | 2026-03-09 00:59:20 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:20.827552 | orchestrator | 2026-03-09 00:59:20 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:20.828792 | orchestrator | 2026-03-09 00:59:20 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 00:59:20.829859 | orchestrator | 2026-03-09 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:23.868716 | orchestrator | 2026-03-09 00:59:23 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 00:59:23.870941 | orchestrator | 2026-03-09 00:59:23 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 00:59:23.872257 | orchestrator | 2026-03-09 00:59:23 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 
00:59:23.873184 | orchestrator | 2026-03-09 00:59:23 | INFO  | Wait 1 second(s) until the next check
state STARTED 2026-03-09 01:00:58.423611 | orchestrator | 2026-03-09 01:00:58 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:00:58.423843 | orchestrator | 2026-03-09 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:01.479113 | orchestrator | 2026-03-09 01:01:01 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:01:01.481265 | orchestrator | 2026-03-09 01:01:01 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 01:01:01.483899 | orchestrator | 2026-03-09 01:01:01 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:01:01.483954 | orchestrator | 2026-03-09 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:04.527016 | orchestrator | 2026-03-09 01:01:04 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:01:04.528728 | orchestrator | 2026-03-09 01:01:04 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 01:01:04.530774 | orchestrator | 2026-03-09 01:01:04 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:01:04.531199 | orchestrator | 2026-03-09 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:07.569800 | orchestrator | 2026-03-09 01:01:07 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:01:07.571591 | orchestrator | 2026-03-09 01:01:07 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state STARTED 2026-03-09 01:01:07.574849 | orchestrator | 2026-03-09 01:01:07 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:01:07.574917 | orchestrator | 2026-03-09 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:10.634802 | orchestrator | 2026-03-09 01:01:10 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED 2026-03-09 01:01:10.636182 | orchestrator 
| 2026-03-09 01:01:10 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:01:10.639982 | orchestrator | 2026-03-09 01:01:10 | INFO  | Task 283232eb-8c62-43aa-9508-07eb265e2c3d is in state SUCCESS 2026-03-09 01:01:10.642299 | orchestrator | 2026-03-09 01:01:10.642362 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 01:01:10.642375 | orchestrator | 2.16.14 2026-03-09 01:01:10.642387 | orchestrator | 2026-03-09 01:01:10.642398 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-09 01:01:10.642409 | orchestrator | 2026-03-09 01:01:10.642420 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-09 01:01:10.642430 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.923) 0:00:00.923 ********** 2026-03-09 01:01:10.642441 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.642453 | orchestrator | 2026-03-09 01:01:10.642463 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-09 01:01:10.642473 | orchestrator | Monday 09 March 2026 00:48:43 +0000 (0:00:01.434) 0:00:02.358 ********** 2026-03-09 01:01:10.642483 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.642493 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.642503 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.642513 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.642523 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.642533 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.642543 | orchestrator | 2026-03-09 01:01:10.642553 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-09 01:01:10.642563 | orchestrator | Monday 09 March 
2026 00:48:45 +0000 (0:00:01.915) 0:00:04.274 ********** 2026-03-09 01:01:10.642573 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.642583 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.642593 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.642602 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.642612 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.642622 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.642658 | orchestrator | 2026-03-09 01:01:10.642707 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-09 01:01:10.642719 | orchestrator | Monday 09 March 2026 00:48:46 +0000 (0:00:01.432) 0:00:05.707 ********** 2026-03-09 01:01:10.642729 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.642738 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.642748 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.642758 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.642768 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.643286 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.643306 | orchestrator | 2026-03-09 01:01:10.643324 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-09 01:01:10.643342 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:01.593) 0:00:07.300 ********** 2026-03-09 01:01:10.643359 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.643375 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.643392 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.643409 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.643424 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.643441 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.643458 | orchestrator | 2026-03-09 01:01:10.643475 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-09 01:01:10.643493 | 
orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:01.564) 0:00:08.864 ********** 2026-03-09 01:01:10.643509 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.643525 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.643542 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.643557 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.643574 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.643591 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.643608 | orchestrator | 2026-03-09 01:01:10.643624 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-09 01:01:10.643640 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:00.988) 0:00:09.853 ********** 2026-03-09 01:01:10.643657 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.643700 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.643718 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.643734 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.643751 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.643767 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.643784 | orchestrator | 2026-03-09 01:01:10.644364 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-09 01:01:10.644402 | orchestrator | Monday 09 March 2026 00:48:52 +0000 (0:00:01.581) 0:00:11.435 ********** 2026-03-09 01:01:10.644420 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.644438 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.644454 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.644470 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.644486 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.644502 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.644518 | orchestrator | 2026-03-09 01:01:10.644535 | orchestrator | TASK [ceph-facts : Set_fact ceph_release 
ceph_stable_release] ****************** 2026-03-09 01:01:10.644550 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:01.373) 0:00:12.808 ********** 2026-03-09 01:01:10.644566 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.644582 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.644599 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.644616 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.644631 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.644647 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.645549 | orchestrator | 2026-03-09 01:01:10.645594 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-09 01:01:10.645608 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:01.272) 0:00:14.081 ********** 2026-03-09 01:01:10.645624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:01:10.645696 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.645718 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.645734 | orchestrator | 2026-03-09 01:01:10.645768 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-09 01:01:10.645785 | orchestrator | Monday 09 March 2026 00:48:55 +0000 (0:00:01.056) 0:00:15.138 ********** 2026-03-09 01:01:10.645800 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.645817 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.645834 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.645913 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.645934 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.645950 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.645967 | orchestrator | 2026-03-09 01:01:10.646652 | orchestrator | TASK [ceph-facts : Find a running 
mon container] ******************************* 2026-03-09 01:01:10.646745 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:03.547) 0:00:18.685 ********** 2026-03-09 01:01:10.646765 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:01:10.646782 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.646799 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.646816 | orchestrator | 2026-03-09 01:01:10.646832 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-09 01:01:10.646849 | orchestrator | Monday 09 March 2026 00:49:01 +0000 (0:00:02.482) 0:00:21.169 ********** 2026-03-09 01:01:10.646866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 01:01:10.646883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 01:01:10.646900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 01:01:10.647448 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.647475 | orchestrator | 2026-03-09 01:01:10.647489 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-09 01:01:10.647503 | orchestrator | Monday 09 March 2026 00:49:02 +0000 (0:00:00.903) 0:00:22.072 ********** 2026-03-09 01:01:10.647519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647566 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.647581 | orchestrator | 2026-03-09 01:01:10.647594 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-09 01:01:10.647608 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:01.415) 0:00:23.487 ********** 2026-03-09 01:01:10.647625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-09 01:01:10.647722 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.647736 | orchestrator | 2026-03-09 01:01:10.647750 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-09 01:01:10.647763 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:01.037) 0:00:24.525 ********** 2026-03-09 01:01:10.648206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 00:49:00.023982', 'end': '2026-03-09 00:49:00.108292', 'delta': '0:00:00.084310', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.648232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 00:49:00.753543', 'end': '2026-03-09 00:49:00.845782', 'delta': '0:00:00.092239', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.648241 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 00:49:01.452679', 'end': '2026-03-09 00:49:01.560964', 'delta': '0:00:00.108285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.648250 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.648258 | orchestrator | 2026-03-09 01:01:10.648266 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-09 01:01:10.648274 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:00.319) 0:00:24.844 ********** 2026-03-09 01:01:10.648283 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.648291 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.648299 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.648307 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.648315 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.648599 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.648621 | orchestrator | 2026-03-09 01:01:10.648635 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-09 01:01:10.648663 | orchestrator | Monday 09 March 2026 00:49:08 +0000 (0:00:02.625) 0:00:27.469 ********** 2026-03-09 01:01:10.648709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.648723 | orchestrator | 2026-03-09 01:01:10.648736 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-09 01:01:10.648749 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:01.945) 0:00:29.415 ********** 2026-03-09 01:01:10.648762 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.648775 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.648787 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.648800 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.648814 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.648827 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.648839 | orchestrator | 2026-03-09 01:01:10.648852 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-09 01:01:10.648866 | orchestrator | Monday 09 March 2026 00:49:12 +0000 (0:00:02.426) 0:00:31.841 ********** 2026-03-09 01:01:10.648879 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.648893 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.648906 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.648918 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.648931 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.648941 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.648949 | orchestrator | 2026-03-09 01:01:10.648958 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-09 01:01:10.648966 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:02.212) 0:00:34.054 ********** 2026-03-09 01:01:10.648973 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.648981 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.648989 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.648997 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649005 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649013 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 01:01:10.649021 | orchestrator | 2026-03-09 01:01:10.649029 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-09 01:01:10.649037 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:03.242) 0:00:37.297 ********** 2026-03-09 01:01:10.649045 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649053 | orchestrator | 2026-03-09 01:01:10.649061 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-09 01:01:10.649069 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:00.230) 0:00:37.527 ********** 2026-03-09 01:01:10.649077 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649085 | orchestrator | 2026-03-09 01:01:10.649102 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-09 01:01:10.649110 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:00.414) 0:00:37.942 ********** 2026-03-09 01:01:10.649118 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649126 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649134 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649236 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649249 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649257 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649265 | orchestrator | 2026-03-09 01:01:10.649273 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-09 01:01:10.649281 | orchestrator | Monday 09 March 2026 00:49:19 +0000 (0:00:01.207) 0:00:39.149 ********** 2026-03-09 01:01:10.649289 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649297 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649305 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649313 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 01:01:10.649321 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649339 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649347 | orchestrator | 2026-03-09 01:01:10.649355 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-09 01:01:10.649363 | orchestrator | Monday 09 March 2026 00:49:21 +0000 (0:00:02.007) 0:00:41.156 ********** 2026-03-09 01:01:10.649371 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649379 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649387 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649395 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649402 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649410 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649418 | orchestrator | 2026-03-09 01:01:10.649426 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-09 01:01:10.649434 | orchestrator | Monday 09 March 2026 00:49:23 +0000 (0:00:01.440) 0:00:42.597 ********** 2026-03-09 01:01:10.649442 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649450 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649458 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649466 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649473 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649481 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649489 | orchestrator | 2026-03-09 01:01:10.649497 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-09 01:01:10.649505 | orchestrator | Monday 09 March 2026 00:49:25 +0000 (0:00:02.327) 0:00:44.925 ********** 2026-03-09 01:01:10.649513 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649521 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 01:01:10.649529 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649537 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649544 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649552 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649560 | orchestrator | 2026-03-09 01:01:10.649568 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-09 01:01:10.649576 | orchestrator | Monday 09 March 2026 00:49:26 +0000 (0:00:01.341) 0:00:46.266 ********** 2026-03-09 01:01:10.649584 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649592 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649600 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649608 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649616 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649624 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649631 | orchestrator | 2026-03-09 01:01:10.649639 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-09 01:01:10.649647 | orchestrator | Monday 09 March 2026 00:49:29 +0000 (0:00:02.070) 0:00:48.336 ********** 2026-03-09 01:01:10.649736 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.649746 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.649754 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.649762 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.649771 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.649779 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.649787 | orchestrator | 2026-03-09 01:01:10.649795 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-09 01:01:10.649804 | orchestrator | Monday 09 March 2026 00:49:30 +0000 (0:00:01.355) 
0:00:49.692 ********** 2026-03-09 01:01:10.649815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85', 'dm-uuid-LVM-w3KmgfdCLRCz1nzP1ZpO9H9pHqJp1r7WcbHFA9REnlGsm5wfiRuHIIAZJeZFEBOr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.649832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3', 'dm-uuid-LVM-2DzBRdoHI7a6R3hiAm39d4nXHwL76disOJvxLFpTMn4O8Cnk33qSzCmskqV7mLMX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.649907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.649920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0', 'dm-uuid-LVM-Crn65bAtcJ8NY0QAXe6hc3ClXzBKgzu5c2fiklXOx2FAFa7GdHF2ubYcMum8p8wZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2', 'dm-uuid-LVM-fcEfuB2607j6ZYoUmX15C7Lmw7ILBQhowckmumsYlkuISJLIZtrE8JLpZYi3Ufhx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.649989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd', 'dm-uuid-LVM-py0FfaQCrNAhEvJbHPwFiO3HcjwJiOciI5fsD9hd11KDxNNfPJkoZovcROKbAqBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hZ2zsR-dpet-WtZx-YO63-Zyv2-SQcu-6wa4uF', 'scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d', 'scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd', 'dm-uuid-LVM-H8lwa76xLUMSSuogPAeG6nzZ4hft20bqk0pAjtaLPc53vzwN0pGL74vNP6IJxLA6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WV45Xj-T3Dy-wDPY-kBFk-cqa0-nBae-ixHoA9', 'scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238', 'scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16', 'scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-50MCkw-QrFW-3czy-Y4uM-IwOG-BDk8-HCbrtU', 'scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29', 'scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C1x2z5-fK0E-NcTN-wBoz-sr5t-Wo21-SbAqpG', 'scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f', 'scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112', 'scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part1', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part14', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part15', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part16', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.650980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.650993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7pJXzq-1pyI-wtRg-uBiv-Ufc4-mOUb-oEBe2k', 'scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396', 'scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.651049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4GU76F-SXdF-ds4a-84RK-BRIp-1hBV-STWcsg', 'scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0', 'scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.651059 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.651067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc', 'scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.651074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 01:01:10.651089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 01:01:10.651096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-03-09 01:01:10.651103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:01:10.651207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part1', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part14', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part15', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part16', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:01:10.651220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:01:10.651227 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.651234 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.651241 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.651248 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.651304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:01:10.651315 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.651332 | orchestrator | 2026-03-09 01:01:10.651339 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-09 01:01:10.651347 | orchestrator | Monday 09 March 2026 00:49:36 +0000 (0:00:06.032) 0:00:55.725 ********** 2026-03-09 01:01:10.651355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85', 'dm-uuid-LVM-w3KmgfdCLRCz1nzP1ZpO9H9pHqJp1r7WcbHFA9REnlGsm5wfiRuHIIAZJeZFEBOr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3', 'dm-uuid-LVM-2DzBRdoHI7a6R3hiAm39d4nXHwL76disOJvxLFpTMn4O8Cnk33qSzCmskqV7mLMX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0', 'dm-uuid-LVM-Crn65bAtcJ8NY0QAXe6hc3ClXzBKgzu5c2fiklXOx2FAFa7GdHF2ubYcMum8p8wZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651470 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2', 'dm-uuid-LVM-fcEfuB2607j6ZYoUmX15C7Lmw7ILBQhowckmumsYlkuISJLIZtrE8JLpZYi3Ufhx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651485 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 
01:01:10.651580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651601 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15', 
'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651690 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd', 'dm-uuid-LVM-py0FfaQCrNAhEvJbHPwFiO3HcjwJiOciI5fsD9hd11KDxNNfPJkoZovcROKbAqBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd', 'dm-uuid-LVM-H8lwa76xLUMSSuogPAeG6nzZ4hft20bqk0pAjtaLPc53vzwN0pGL74vNP6IJxLA6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651785 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.651809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-50MCkw-QrFW-3czy-Y4uM-IwOG-BDk8-HCbrtU', 'scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29', 'scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7pJXzq-1pyI-wtRg-uBiv-Ufc4-mOUb-oEBe2k', 'scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396', 'scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4GU76F-SXdF-ds4a-84RK-BRIp-1hBV-STWcsg', 'scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0', 'scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651987 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc', 'scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.651994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652012 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C1x2z5-fK0E-NcTN-wBoz-sr5t-Wo21-SbAqpG', 'scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f', 'scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652093 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112', 'scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652195 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652273 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652285 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652293 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part1', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part14', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part15', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part16', 'scsi-SQEMU_QEMU_HARDDISK_9f2fd835-a7d9-47f6-b03f-7ff6492b6850-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652304 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652317 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.652370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652380 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652387 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hZ2zsR-dpet-WtZx-YO63-Zyv2-SQcu-6wa4uF', 'scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d', 'scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652402 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652474 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652492 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652499 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652506 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WV45Xj-T3Dy-wDPY-kBFk-cqa0-nBae-ixHoA9', 'scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238', 'scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652576 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bd37f1a-45e9-4691-b1ea-c721d1b654c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16', 'scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652641 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.652775 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.652794 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.652806 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.652819 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652830 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652838 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652845 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652852 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652867 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 01:01:10.652947 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.652959 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.652967 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part1', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part14', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part15', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part16', 'scsi-SQEMU_QEMU_HARDDISK_524a18ac-4c70-47e5-adf9-4e22d62cf9be-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-09 01:01:10.652984 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:01:10.652992 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.652999 | orchestrator | 2026-03-09 01:01:10.653052 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-09 01:01:10.653063 | orchestrator | Monday 09 March 2026 00:49:40 +0000 (0:00:04.389) 0:01:00.114 ********** 2026-03-09 01:01:10.653070 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.653077 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.653084 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.653091 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.653098 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.653105 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.653111 | orchestrator | 2026-03-09 01:01:10.653118 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-09 01:01:10.653125 | orchestrator | Monday 09 March 2026 00:49:43 +0000 (0:00:02.549) 0:01:02.664 ********** 2026-03-09 01:01:10.653132 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.653139 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.653146 | 
orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.653153 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.653160 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.653166 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.653173 | orchestrator | 2026-03-09 01:01:10.653180 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:01:10.653187 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:01.648) 0:01:04.312 ********** 2026-03-09 01:01:10.653194 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653201 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653208 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653215 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653221 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653228 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653235 | orchestrator | 2026-03-09 01:01:10.653252 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:01:10.653259 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:02.843) 0:01:07.156 ********** 2026-03-09 01:01:10.653265 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653271 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653277 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653284 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653291 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653302 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653308 | orchestrator | 2026-03-09 01:01:10.653315 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:01:10.653321 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:01.344) 0:01:08.500 ********** 2026-03-09 01:01:10.653328 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 01:01:10.653334 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653340 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653346 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653353 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653359 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653365 | orchestrator | 2026-03-09 01:01:10.653372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:01:10.653378 | orchestrator | Monday 09 March 2026 00:49:51 +0000 (0:00:01.909) 0:01:10.410 ********** 2026-03-09 01:01:10.653385 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653391 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653397 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653403 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653410 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653416 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653422 | orchestrator | 2026-03-09 01:01:10.653429 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-09 01:01:10.653435 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:01.741) 0:01:12.152 ********** 2026-03-09 01:01:10.653441 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-09 01:01:10.653448 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-09 01:01:10.653454 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-09 01:01:10.653460 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-09 01:01:10.653466 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-09 01:01:10.653473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 01:01:10.653479 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 
2026-03-09 01:01:10.653485 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-09 01:01:10.653492 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-09 01:01:10.653498 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-09 01:01:10.653504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-09 01:01:10.653511 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-09 01:01:10.653517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-09 01:01:10.653523 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-09 01:01:10.653530 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-09 01:01:10.653536 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-09 01:01:10.653543 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-09 01:01:10.653549 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-09 01:01:10.653555 | orchestrator | 2026-03-09 01:01:10.653561 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-09 01:01:10.653568 | orchestrator | Monday 09 March 2026 00:49:59 +0000 (0:00:06.314) 0:01:18.467 ********** 2026-03-09 01:01:10.653574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 01:01:10.653581 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-09 01:01:10.653587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 01:01:10.653593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 01:01:10.653603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 01:01:10.653610 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 01:01:10.653616 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653623 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-0)  2026-03-09 01:01:10.653705 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 01:01:10.653715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 01:01:10.653722 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653729 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 01:01:10.653741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 01:01:10.653748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 01:01:10.653754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 01:01:10.653760 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653766 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 01:01:10.653773 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 01:01:10.653779 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653786 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 01:01:10.653792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 01:01:10.653798 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 01:01:10.653805 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653811 | orchestrator | 2026-03-09 01:01:10.653818 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-09 01:01:10.653824 | orchestrator | Monday 09 March 2026 00:50:01 +0000 (0:00:02.030) 0:01:20.497 ********** 2026-03-09 01:01:10.653830 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.653837 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.653843 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.653850 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.653857 | orchestrator | 2026-03-09 01:01:10.653863 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 01:01:10.653871 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:01.789) 0:01:22.287 ********** 2026-03-09 01:01:10.653877 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653884 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653890 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653896 | orchestrator | 2026-03-09 01:01:10.653903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 01:01:10.653909 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:00.911) 0:01:23.199 ********** 2026-03-09 01:01:10.653915 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653921 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653928 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653934 | orchestrator | 2026-03-09 01:01:10.653941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 01:01:10.653947 | orchestrator | Monday 09 March 2026 00:50:04 +0000 (0:00:00.908) 0:01:24.107 ********** 2026-03-09 01:01:10.653954 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.653960 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.653966 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.653973 | orchestrator | 2026-03-09 01:01:10.653979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 01:01:10.653985 | orchestrator | Monday 09 March 2026 00:50:05 +0000 (0:00:00.842) 0:01:24.950 ********** 2026-03-09 01:01:10.653991 | orchestrator | 
ok: [testbed-node-4] 2026-03-09 01:01:10.653998 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.654004 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.654010 | orchestrator | 2026-03-09 01:01:10.654076 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 01:01:10.654088 | orchestrator | Monday 09 March 2026 00:50:06 +0000 (0:00:00.574) 0:01:25.525 ********** 2026-03-09 01:01:10.654106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.654115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.654126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.654136 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.654146 | orchestrator | 2026-03-09 01:01:10.654158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 01:01:10.654168 | orchestrator | Monday 09 March 2026 00:50:06 +0000 (0:00:00.494) 0:01:26.020 ********** 2026-03-09 01:01:10.654178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.654189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.654197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.654204 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.654210 | orchestrator | 2026-03-09 01:01:10.654216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 01:01:10.654222 | orchestrator | Monday 09 March 2026 00:50:07 +0000 (0:00:00.567) 0:01:26.587 ********** 2026-03-09 01:01:10.654229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.654235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.654241 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-09 01:01:10.654248 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.654254 | orchestrator | 2026-03-09 01:01:10.654260 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 01:01:10.654267 | orchestrator | Monday 09 March 2026 00:50:07 +0000 (0:00:00.459) 0:01:27.047 ********** 2026-03-09 01:01:10.654273 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.654279 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.654286 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.654292 | orchestrator | 2026-03-09 01:01:10.654303 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 01:01:10.654309 | orchestrator | Monday 09 March 2026 00:50:08 +0000 (0:00:00.448) 0:01:27.496 ********** 2026-03-09 01:01:10.654317 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 01:01:10.654325 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 01:01:10.654362 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 01:01:10.654371 | orchestrator | 2026-03-09 01:01:10.654379 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-09 01:01:10.654387 | orchestrator | Monday 09 March 2026 00:50:09 +0000 (0:00:01.072) 0:01:28.568 ********** 2026-03-09 01:01:10.654394 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:01:10.654402 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.654409 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.654417 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:01:10.654425 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 01:01:10.654432 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:01:10.654440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:01:10.654447 | orchestrator | 2026-03-09 01:01:10.654454 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-09 01:01:10.654462 | orchestrator | Monday 09 March 2026 00:50:10 +0000 (0:00:00.969) 0:01:29.538 ********** 2026-03-09 01:01:10.654469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:01:10.654477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.654484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.654497 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:01:10.654505 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 01:01:10.654512 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:01:10.654520 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:01:10.654527 | orchestrator | 2026-03-09 01:01:10.654535 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 01:01:10.654543 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:02.333) 0:01:31.871 ********** 2026-03-09 01:01:10.654551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.654559 | orchestrator | 2026-03-09 01:01:10.654566 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-09 01:01:10.654574 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:01.523) 0:01:33.395 ********** 2026-03-09 01:01:10.654581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.654588 | orchestrator | 2026-03-09 01:01:10.654596 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 01:01:10.654603 | orchestrator | Monday 09 March 2026 00:50:15 +0000 (0:00:01.696) 0:01:35.092 ********** 2026-03-09 01:01:10.654609 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.654615 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.654621 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.654628 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.654634 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.654640 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.654646 | orchestrator | 2026-03-09 01:01:10.654652 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 01:01:10.654659 | orchestrator | Monday 09 March 2026 00:50:17 +0000 (0:00:01.946) 0:01:37.038 ********** 2026-03-09 01:01:10.654665 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.654711 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.654718 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.654724 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.654731 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.654737 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.654743 | orchestrator | 2026-03-09 01:01:10.654750 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 01:01:10.654756 | orchestrator | Monday 09 March 2026 00:50:19 +0000 
(0:00:01.776) 0:01:38.815 ********** 2026-03-09 01:01:10.654762 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.654769 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.654775 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.654781 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.654788 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.654794 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.654800 | orchestrator | 2026-03-09 01:01:10.654806 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 01:01:10.654813 | orchestrator | Monday 09 March 2026 00:50:21 +0000 (0:00:01.604) 0:01:40.420 ********** 2026-03-09 01:01:10.654820 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.654826 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.654832 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.654839 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.654845 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.654851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.654858 | orchestrator | 2026-03-09 01:01:10.654864 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 01:01:10.654874 | orchestrator | Monday 09 March 2026 00:50:22 +0000 (0:00:01.466) 0:01:41.886 ********** 2026-03-09 01:01:10.654889 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.654896 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.654902 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.654908 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.654915 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.654944 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.654952 | orchestrator | 2026-03-09 01:01:10.654958 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
2026-03-09 01:01:10.654964 | orchestrator | *************************
Monday 09 March 2026 00:50:24 +0000 (0:00:02.082) 0:01:43.968 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:50:25 +0000 (0:00:00.973) 0:01:44.942 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:50:26 +0000 (0:00:01.113) 0:01:46.055 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:50:27 +0000 (0:00:01.177) 0:01:47.233 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:50:30 +0000 (0:00:02.740) 0:01:49.973 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:50:31 +0000 (0:00:01.294) 0:01:51.268 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:50:33 +0000 (0:00:01.556) 0:01:52.824 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:50:34 +0000 (0:00:01.163) 0:01:53.987 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:50:36 +0000 (0:00:01.642) 0:01:55.630 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:50:37 +0000 (0:00:00.956) 0:01:56.586 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:50:38 +0000 (0:00:01.627) 0:01:58.214 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:50:39 +0000 (0:00:01.024) 0:01:59.238 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:50:41 +0000 (0:00:01.571) 0:02:00.810 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:50:42 +0000 (0:00:01.405) 0:02:02.215 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Monday 09 March 2026 00:50:44 +0000 (0:00:01.877) 0:02:04.093 **********
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Monday 09 March 2026 00:50:46 +0000 (0:00:03.952) 0:02:05.824 **********
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-5]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Monday 09 March 2026 00:50:50 +0000 (0:00:01.557) 0:02:09.777 **********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Monday 09 March 2026 00:50:52 +0000 (0:00:00.777) 0:02:11.334 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Monday 09 March 2026 00:50:52 +0000 (0:00:01.173) 0:02:12.112 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Monday 09 March 2026 00:50:54 +0000 (0:00:02.537) 0:02:13.285 **********
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Monday 09 March 2026 00:50:56 +0000 (0:00:01.503) 0:02:15.822 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Monday 09 March 2026 00:50:58 +0000 (0:00:00.703) 0:02:17.326 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Monday 09 March 2026 00:50:58 +0000 (0:00:01.081) 0:02:18.029 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Monday 09 March 2026 00:50:59 +0000 (0:00:00.770) 0:02:19.110 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Monday 09 March 2026 00:51:00 +0000 (0:00:01.524) 0:02:19.881 **********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Monday 09 March 2026 00:51:02 +0000 (0:00:42.216) 0:02:21.405 **********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Monday 09 March 2026 00:51:44 +0000 (0:00:00.847) 0:03:03.622 **********
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Monday 09 March 2026 00:51:45 +0000 (0:00:00.971) 0:03:04.469 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Monday 09 March 2026 00:51:46 +0000 (0:00:00.173) 0:03:05.441 **********
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Monday 09 March 2026 00:51:46 +0000 (0:00:01.054) 0:03:05.614 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Monday 09 March 2026 00:51:47 +0000 (0:00:00.986) 0:03:06.668 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Monday 09 March 2026 00:51:48 +0000 (0:00:00.787) 0:03:07.655 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Monday 09 March 2026 00:51:49 +0000 (0:00:03.621) 0:03:08.443 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Monday 09 March 2026 00:51:52 +0000 (0:00:01.078) 0:03:12.065 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Monday 09 March 2026 00:51:53 +0000 (0:00:01.743) 0:03:13.143 **********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Monday 09 March 2026 00:51:55 +0000 (0:00:01.134) 0:03:14.886 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Monday 09 March 2026 00:51:56 +0000 (0:00:01.015) 0:03:16.021 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Monday 09 March 2026 00:51:57 +0000 (0:00:01.484) 0:03:17.036 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Monday 09 March 2026 00:51:59 +0000 (0:00:01.396) 0:03:18.520 **********
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Monday 09 March 2026 00:52:00 +0000 (0:00:01.300) 0:03:19.916 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Monday 09 March 2026 00:52:01 +0000 (0:00:01.061) 0:03:21.217 **********
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Monday 09 March 2026 00:52:03 +0000 (0:00:01.357) 0:03:22.279 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Monday 09 March 2026 00:52:04 +0000 (0:00:01.167) 0:03:23.637 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Monday 09 March 2026 00:52:05 +0000 (0:00:01.852) 0:03:24.804 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Monday 09 March 2026 00:52:07 +0000 (0:00:01.574) 0:03:26.656 **********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Monday 09 March 2026 00:52:08 +0000 (0:00:00.000) 0:03:28.231 **********
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-09 01:01:10.658114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 01:01:10.658119 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 01:01:10.658125 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658130 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 01:01:10.658136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 01:01:10.658141 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 01:01:10.658147 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 01:01:10.658152 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658158 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 01:01:10.658169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 01:01:10.658177 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658183 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-09 01:01:10.658188 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658211 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658218 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 01:01:10.658223 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658229 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658234 | orchestrator 
| changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-09 01:01:10.658240 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 01:01:10.658245 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-09 01:01:10.658251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658256 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-09 01:01:10.658262 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-09 01:01:10.658267 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-09 01:01:10.658273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 01:01:10.658278 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658284 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-09 01:01:10.658289 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-09 01:01:10.658295 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 01:01:10.658300 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-09 01:01:10.658306 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-09 01:01:10.658311 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-09 01:01:10.658317 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-09 01:01:10.658322 | orchestrator | 2026-03-09 01:01:10.658328 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-09 01:01:10.658338 | orchestrator | Monday 09 March 2026 00:52:16 +0000 (0:00:07.509) 0:03:35.740 ********** 2026-03-09 01:01:10.658344 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658349 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:01:10.658355 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658361 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.658367 | orchestrator | 2026-03-09 01:01:10.658372 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-09 01:01:10.658378 | orchestrator | Monday 09 March 2026 00:52:17 +0000 (0:00:00.860) 0:03:36.601 ********** 2026-03-09 01:01:10.658383 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658389 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658395 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658400 | orchestrator | 2026-03-09 01:01:10.658406 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-09 01:01:10.658411 | orchestrator | Monday 09 March 2026 00:52:18 +0000 (0:00:01.110) 0:03:37.711 ********** 2026-03-09 01:01:10.658417 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658422 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658428 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.658433 | orchestrator | 2026-03-09 01:01:10.658439 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-03-09 01:01:10.658444 | orchestrator | Monday 09 March 2026 00:52:19 +0000 (0:00:01.232) 0:03:38.944 ********** 2026-03-09 01:01:10.658450 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.658455 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.658461 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.658467 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658472 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658478 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658483 | orchestrator | 2026-03-09 01:01:10.658489 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-09 01:01:10.658494 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:00.628) 0:03:39.573 ********** 2026-03-09 01:01:10.658500 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.658505 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.658510 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.658516 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658521 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658527 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658533 | orchestrator | 2026-03-09 01:01:10.658538 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-09 01:01:10.658544 | orchestrator | Monday 09 March 2026 00:52:21 +0000 (0:00:01.200) 0:03:40.773 ********** 2026-03-09 01:01:10.658549 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658558 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658564 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658569 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658575 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658580 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:01:10.658585 | orchestrator | 2026-03-09 01:01:10.658607 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-09 01:01:10.658620 | orchestrator | Monday 09 March 2026 00:52:22 +0000 (0:00:00.619) 0:03:41.393 ********** 2026-03-09 01:01:10.658626 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658631 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658637 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658642 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658648 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658653 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658658 | orchestrator | 2026-03-09 01:01:10.658664 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-09 01:01:10.658682 | orchestrator | Monday 09 March 2026 00:52:23 +0000 (0:00:00.994) 0:03:42.388 ********** 2026-03-09 01:01:10.658688 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658693 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658699 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658704 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658709 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658715 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658720 | orchestrator | 2026-03-09 01:01:10.658726 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-09 01:01:10.658731 | orchestrator | Monday 09 March 2026 00:52:23 +0000 (0:00:00.745) 0:03:43.133 ********** 2026-03-09 01:01:10.658737 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658742 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658748 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658753 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 01:01:10.658759 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658764 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658770 | orchestrator | 2026-03-09 01:01:10.658775 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-09 01:01:10.658781 | orchestrator | Monday 09 March 2026 00:52:24 +0000 (0:00:00.825) 0:03:43.958 ********** 2026-03-09 01:01:10.658786 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658791 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658797 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658802 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658807 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658813 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658818 | orchestrator | 2026-03-09 01:01:10.658824 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-09 01:01:10.658829 | orchestrator | Monday 09 March 2026 00:52:25 +0000 (0:00:00.633) 0:03:44.591 ********** 2026-03-09 01:01:10.658835 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.658840 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.658846 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.658851 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658857 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658862 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658867 | orchestrator | 2026-03-09 01:01:10.658873 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-09 01:01:10.658878 | orchestrator | Monday 09 March 2026 00:52:26 +0000 (0:00:01.024) 0:03:45.616 ********** 2026-03-09 01:01:10.658884 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 01:01:10.658889 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658895 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658901 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.658906 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.658912 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.658917 | orchestrator | 2026-03-09 01:01:10.658923 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-09 01:01:10.658928 | orchestrator | Monday 09 March 2026 00:52:30 +0000 (0:00:03.887) 0:03:49.504 ********** 2026-03-09 01:01:10.658937 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.658943 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.658949 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.658954 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.658960 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.658965 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.658971 | orchestrator | 2026-03-09 01:01:10.658976 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-09 01:01:10.658982 | orchestrator | Monday 09 March 2026 00:52:31 +0000 (0:00:01.192) 0:03:50.697 ********** 2026-03-09 01:01:10.658987 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.658993 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.658998 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.659004 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659009 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659015 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659020 | orchestrator | 2026-03-09 01:01:10.659026 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-09 01:01:10.659031 | orchestrator | Monday 09 March 2026 00:52:32 +0000 
(0:00:00.986) 0:03:51.683 ********** 2026-03-09 01:01:10.659037 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659042 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659048 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659054 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659059 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659064 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659070 | orchestrator | 2026-03-09 01:01:10.659075 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-09 01:01:10.659081 | orchestrator | Monday 09 March 2026 00:52:34 +0000 (0:00:02.112) 0:03:53.796 ********** 2026-03-09 01:01:10.659087 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.659095 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.659101 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.659107 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659130 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659136 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659142 | orchestrator | 2026-03-09 01:01:10.659148 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-09 01:01:10.659153 | orchestrator | Monday 09 March 2026 00:52:35 +0000 (0:00:01.038) 0:03:54.834 ********** 2026-03-09 01:01:10.659160 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-09 01:01:10.659167 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-09 01:01:10.659174 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-09 01:01:10.659180 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-09 01:01:10.659191 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659197 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-09 01:01:10.659202 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-09 01:01:10.659208 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 01:01:10.659213 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659219 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659224 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659230 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659235 | orchestrator | 2026-03-09 01:01:10.659241 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-09 01:01:10.659247 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:01.728) 0:03:56.562 ********** 2026-03-09 01:01:10.659252 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659258 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659263 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659268 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659274 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659279 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659284 | orchestrator | 2026-03-09 01:01:10.659290 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-09 01:01:10.659295 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:00.603) 0:03:57.166 ********** 2026-03-09 01:01:10.659301 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659307 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659312 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659317 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659323 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659328 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659334 | orchestrator | 2026-03-09 01:01:10.659339 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 01:01:10.659345 | orchestrator | 
Monday 09 March 2026 00:52:38 +0000 (0:00:00.838) 0:03:58.005 ********** 2026-03-09 01:01:10.659350 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659356 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659361 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659367 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659372 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659377 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659383 | orchestrator | 2026-03-09 01:01:10.659388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 01:01:10.659394 | orchestrator | Monday 09 March 2026 00:52:39 +0000 (0:00:01.031) 0:03:59.037 ********** 2026-03-09 01:01:10.659399 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659405 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659413 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659419 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659424 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659430 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659435 | orchestrator | 2026-03-09 01:01:10.659441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 01:01:10.659463 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:01.471) 0:04:00.508 ********** 2026-03-09 01:01:10.659473 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659478 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.659483 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.659489 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659494 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659499 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659505 | orchestrator | 2026-03-09 01:01:10.659510 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 01:01:10.659516 | orchestrator | Monday 09 March 2026 00:52:42 +0000 (0:00:00.893) 0:04:01.402 ********** 2026-03-09 01:01:10.659521 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.659527 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.659532 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659538 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.659543 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659549 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659554 | orchestrator | 2026-03-09 01:01:10.659560 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 01:01:10.659565 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:01.954) 0:04:03.356 ********** 2026-03-09 01:01:10.659571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.659576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.659582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.659587 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659593 | orchestrator | 2026-03-09 01:01:10.659598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 01:01:10.659604 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:00.465) 0:04:03.822 ********** 2026-03-09 01:01:10.659609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.659614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.659620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.659625 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659631 | orchestrator | 2026-03-09 01:01:10.659637 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 01:01:10.659642 | orchestrator | Monday 09 March 2026 00:52:45 +0000 (0:00:00.468) 0:04:04.291 ********** 2026-03-09 01:01:10.659648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.659653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.659659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.659664 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.659680 | orchestrator | 2026-03-09 01:01:10.659685 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 01:01:10.659691 | orchestrator | Monday 09 March 2026 00:52:45 +0000 (0:00:00.634) 0:04:04.926 ********** 2026-03-09 01:01:10.659696 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.659702 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.659707 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.659713 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659718 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.659724 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659729 | orchestrator | 2026-03-09 01:01:10.659735 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 01:01:10.659740 | orchestrator | Monday 09 March 2026 00:52:46 +0000 (0:00:00.864) 0:04:05.791 ********** 2026-03-09 01:01:10.659746 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 01:01:10.659751 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 01:01:10.659757 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 01:01:10.659762 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-09 01:01:10.659772 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-09 01:01:10.659777 | orchestrator | 
skipping: [testbed-node-1] 2026-03-09 01:01:10.659783 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.659788 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-09 01:01:10.659794 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.659799 | orchestrator | 2026-03-09 01:01:10.659804 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-09 01:01:10.659810 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:02.552) 0:04:08.343 ********** 2026-03-09 01:01:10.659815 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.659821 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.659826 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.659832 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.659837 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.659843 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.659848 | orchestrator | 2026-03-09 01:01:10.659854 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 01:01:10.659860 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:03.231) 0:04:11.575 ********** 2026-03-09 01:01:10.659865 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.659871 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.659876 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.659881 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.659887 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.659892 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.659898 | orchestrator | 2026-03-09 01:01:10.659904 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-09 01:01:10.659909 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:01.726) 0:04:13.302 ********** 2026-03-09 01:01:10.659915 | orchestrator | 
skipping: [testbed-node-4]
2026-03-09 01:01:10.659920 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.659925 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.659931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.659937 | orchestrator |
2026-03-09 01:01:10.659942 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-09 01:01:10.659965 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.976) 0:04:14.278 **********
2026-03-09 01:01:10.659972 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.659978 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.659983 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.659989 | orchestrator |
2026-03-09 01:01:10.659994 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-09 01:01:10.660000 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.417) 0:04:14.696 **********
2026-03-09 01:01:10.660005 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.660011 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.660016 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.660022 | orchestrator |
2026-03-09 01:01:10.660027 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-09 01:01:10.660033 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:01.410) 0:04:16.106 **********
2026-03-09 01:01:10.660038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-09 01:01:10.660044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-09 01:01:10.660049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-09 01:01:10.660055 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.660060 | orchestrator |
2026-03-09 01:01:10.660066 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-09 01:01:10.660071 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.620) 0:04:16.727 **********
2026-03-09 01:01:10.660077 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.660083 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.660093 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.660098 | orchestrator |
2026-03-09 01:01:10.660104 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-09 01:01:10.660109 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.351) 0:04:17.078 **********
2026-03-09 01:01:10.660115 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.660120 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.660126 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.660132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:01:10.660137 | orchestrator |
2026-03-09 01:01:10.660143 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-09 01:01:10.660148 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:01.330) 0:04:18.409 **********
2026-03-09 01:01:10.660154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 01:01:10.660160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 01:01:10.660165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 01:01:10.660171 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660176 | orchestrator |
2026-03-09 01:01:10.660182 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-09 01:01:10.660187 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:00.574) 0:04:18.984 **********
2026-03-09 01:01:10.660193 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660198 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.660204 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.660209 | orchestrator |
2026-03-09 01:01:10.660215 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-09 01:01:10.660220 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:00.492) 0:04:19.476 **********
2026-03-09 01:01:10.660226 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660232 | orchestrator |
2026-03-09 01:01:10.660237 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-09 01:01:10.660242 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:00.443) 0:04:19.920 **********
2026-03-09 01:01:10.660248 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660254 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.660259 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.660265 | orchestrator |
2026-03-09 01:01:10.660270 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-09 01:01:10.660275 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.451) 0:04:20.371 **********
2026-03-09 01:01:10.660281 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660287 | orchestrator |
2026-03-09 01:01:10.660292 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-09 01:01:10.660298 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.288) 0:04:20.659 **********
2026-03-09 01:01:10.660303 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660308 | orchestrator |
2026-03-09 01:01:10.660314 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-09 01:01:10.660320 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.234) 0:04:20.894 **********
2026-03-09 01:01:10.660325 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660330 | orchestrator |
2026-03-09 01:01:10.660336 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-09 01:01:10.660341 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.137) 0:04:21.032 **********
2026-03-09 01:01:10.660347 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660352 | orchestrator |
2026-03-09 01:01:10.660358 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-09 01:01:10.660363 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:00.934) 0:04:21.966 **********
2026-03-09 01:01:10.660369 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660374 | orchestrator |
2026-03-09 01:01:10.660407 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-09 01:01:10.660413 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:00.295) 0:04:22.261 **********
2026-03-09 01:01:10.660419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 01:01:10.660427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 01:01:10.660432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 01:01:10.660438 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660444 | orchestrator |
2026-03-09 01:01:10.660449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-09 01:01:10.660473 | orchestrator | Monday 09 March 2026 00:53:03 +0000 (0:00:00.493) 0:04:22.755 **********
2026-03-09 01:01:10.660480 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660486 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.660491 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.660497 | orchestrator |
2026-03-09 01:01:10.660502 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-09 01:01:10.660508 | orchestrator | Monday 09 March 2026 00:53:03 +0000 (0:00:00.390) 0:04:23.146 **********
2026-03-09 01:01:10.660513 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660519 | orchestrator |
2026-03-09 01:01:10.660524 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-09 01:01:10.660530 | orchestrator | Monday 09 March 2026 00:53:04 +0000 (0:00:00.227) 0:04:23.373 **********
2026-03-09 01:01:10.660536 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660541 | orchestrator |
2026-03-09 01:01:10.660547 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-09 01:01:10.660552 | orchestrator | Monday 09 March 2026 00:53:04 +0000 (0:00:00.261) 0:04:23.634 **********
2026-03-09 01:01:10.660558 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.660563 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.660568 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.660574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:01:10.660579 | orchestrator |
2026-03-09 01:01:10.660585 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-09 01:01:10.660591 | orchestrator | Monday 09 March 2026 00:53:05 +0000 (0:00:01.237) 0:04:24.872 **********
2026-03-09 01:01:10.660596 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:01:10.660602 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:01:10.660607 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:01:10.660612 | orchestrator |
2026-03-09 01:01:10.660618 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-09 01:01:10.660623 | orchestrator | Monday 09 March 2026 00:53:05 +0000 (0:00:00.339) 0:04:25.211 **********
2026-03-09 01:01:10.660629 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:01:10.660634 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:01:10.660640 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:01:10.660645 | orchestrator |
2026-03-09 01:01:10.660651 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-09 01:01:10.660656 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:01.278) 0:04:26.490 **********
2026-03-09 01:01:10.660662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 01:01:10.660667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 01:01:10.660706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 01:01:10.660712 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660717 | orchestrator |
2026-03-09 01:01:10.660723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-09 01:01:10.660729 | orchestrator | Monday 09 March 2026 00:53:08 +0000 (0:00:01.085) 0:04:27.575 **********
2026-03-09 01:01:10.660734 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:01:10.660740 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:01:10.660751 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:01:10.660756 | orchestrator |
2026-03-09 01:01:10.660762 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-09 01:01:10.660767 | orchestrator | Monday 09 March 2026 00:53:09 +0000 (0:00:00.705) 0:04:28.281 **********
2026-03-09 01:01:10.660773 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.660778 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.660784 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.660789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:01:10.660795 | orchestrator |
2026-03-09 01:01:10.660800 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-09 01:01:10.660806 | orchestrator | Monday 09 March 2026 00:53:10 +0000 (0:00:01.140) 0:04:29.422 **********
2026-03-09 01:01:10.660811 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:01:10.660817 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:01:10.660822 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:01:10.660827 | orchestrator |
2026-03-09 01:01:10.660833 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-09 01:01:10.660838 | orchestrator | Monday 09 March 2026 00:53:10 +0000 (0:00:00.700) 0:04:30.123 **********
2026-03-09 01:01:10.660844 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:01:10.660849 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:01:10.660855 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:01:10.660860 | orchestrator |
2026-03-09 01:01:10.660866 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-09 01:01:10.660872 | orchestrator | Monday 09 March 2026 00:53:12 +0000 (0:00:01.304) 0:04:31.427 **********
2026-03-09 01:01:10.660877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 01:01:10.660883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 01:01:10.660888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 01:01:10.660894 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660899 | orchestrator |
2026-03-09 01:01:10.660904 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-09 01:01:10.660910 | orchestrator | Monday 09 March 2026 00:53:12 +0000 (0:00:00.660) 0:04:32.088 **********
2026-03-09 01:01:10.660915 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:01:10.660921 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:01:10.660926 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:01:10.660932 | orchestrator |
2026-03-09 01:01:10.660937 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-09 01:01:10.660946 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:00.407) 0:04:32.496 **********
2026-03-09 01:01:10.660951 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.660957 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.660962 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.660968 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.660973 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.660999 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661006 | orchestrator |
2026-03-09 01:01:10.661011 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-09 01:01:10.661017 | orchestrator | Monday 09 March 2026 00:53:14 +0000 (0:00:00.958) 0:04:33.455 **********
2026-03-09 01:01:10.661022 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:01:10.661028 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:01:10.661033 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:01:10.661038 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.661043 | orchestrator |
2026-03-09 01:01:10.661048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-09 01:01:10.661053 | orchestrator | Monday 09 March 2026 00:53:15 +0000 (0:00:00.955) 0:04:34.410 **********
2026-03-09 01:01:10.661062 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661067 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661071 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661076 | orchestrator |
2026-03-09 01:01:10.661081 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-09 01:01:10.661086 | orchestrator | Monday 09 March 2026 00:53:15 +0000 (0:00:00.825) 0:04:35.235 **********
2026-03-09 01:01:10.661091 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.661096 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.661100 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.661105 | orchestrator |
2026-03-09 01:01:10.661110 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-09 01:01:10.661115 | orchestrator | Monday 09 March 2026 00:53:17 +0000 (0:00:01.651) 0:04:36.886 **********
2026-03-09 01:01:10.661120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-09 01:01:10.661125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-09 01:01:10.661130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-09 01:01:10.661135 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661140 | orchestrator |
2026-03-09 01:01:10.661145 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-09 01:01:10.661149 | orchestrator | Monday 09 March 2026 00:53:18 +0000 (0:00:00.750) 0:04:37.637 **********
2026-03-09 01:01:10.661154 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661159 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661164 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661169 | orchestrator |
2026-03-09 01:01:10.661174 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-09 01:01:10.661179 | orchestrator |
2026-03-09 01:01:10.661184 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-09 01:01:10.661189 | orchestrator | Monday 09 March 2026 00:53:19 +0000 (0:00:00.644) 0:04:38.281 **********
2026-03-09 01:01:10.661194 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.661199 | orchestrator |
2026-03-09 01:01:10.661204 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-09 01:01:10.661208 | orchestrator | Monday 09 March 2026 00:53:20 +0000 (0:00:01.046) 0:04:39.328 **********
2026-03-09 01:01:10.661214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.661218 | orchestrator |
2026-03-09 01:01:10.661223 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-09 01:01:10.661228 | orchestrator | Monday 09 March 2026 00:53:20 +0000 (0:00:00.569) 0:04:39.897 **********
2026-03-09 01:01:10.661233 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661238 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661243 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661248 | orchestrator |
2026-03-09 01:01:10.661252 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-09 01:01:10.661257 | orchestrator | Monday 09 March 2026 00:53:21 +0000 (0:00:01.197) 0:04:41.095 **********
2026-03-09 01:01:10.661262 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661267 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661272 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661277 | orchestrator |
2026-03-09 01:01:10.661281 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-09 01:01:10.661286 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:00.390) 0:04:41.485 **********
2026-03-09 01:01:10.661291 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661296 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661301 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661306 | orchestrator |
2026-03-09 01:01:10.661311 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-09 01:01:10.661320 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:00.434) 0:04:41.920 **********
2026-03-09 01:01:10.661324 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661329 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661334 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661339 | orchestrator |
2026-03-09 01:01:10.661344 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-09 01:01:10.661349 | orchestrator | Monday 09 March 2026 00:53:23 +0000 (0:00:00.396) 0:04:42.317 **********
2026-03-09 01:01:10.661354 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661359 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661363 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661368 | orchestrator |
2026-03-09 01:01:10.661373 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-09 01:01:10.661378 | orchestrator | Monday 09 March 2026 00:53:24 +0000 (0:00:01.152) 0:04:43.469 **********
2026-03-09 01:01:10.661383 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661392 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661397 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661402 | orchestrator |
2026-03-09 01:01:10.661407 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-09 01:01:10.661412 | orchestrator | Monday 09 March 2026 00:53:24 +0000 (0:00:00.322) 0:04:43.791 **********
2026-03-09 01:01:10.661432 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661438 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661443 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661448 | orchestrator |
2026-03-09 01:01:10.661453 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-09 01:01:10.661458 | orchestrator | Monday 09 March 2026 00:53:24 +0000 (0:00:00.295) 0:04:44.087 **********
2026-03-09 01:01:10.661462 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661467 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661472 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661477 | orchestrator |
2026-03-09 01:01:10.661482 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-09 01:01:10.661487 | orchestrator | Monday 09 March 2026 00:53:25 +0000 (0:00:00.800) 0:04:44.887 **********
2026-03-09 01:01:10.661492 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661497 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661502 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661506 | orchestrator |
2026-03-09 01:01:10.661511 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-09 01:01:10.661516 | orchestrator | Monday 09 March 2026 00:53:26 +0000 (0:00:01.029) 0:04:45.917 **********
2026-03-09 01:01:10.661521 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661526 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661531 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661536 | orchestrator |
2026-03-09 01:01:10.661540 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-09 01:01:10.661545 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.364) 0:04:46.282 **********
2026-03-09 01:01:10.661550 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661555 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661560 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661565 | orchestrator |
2026-03-09 01:01:10.661570 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-09 01:01:10.661575 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.342) 0:04:46.624 **********
2026-03-09 01:01:10.661579 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661584 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661589 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661594 | orchestrator |
2026-03-09 01:01:10.661599 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-09 01:01:10.661604 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.312) 0:04:46.937 **********
2026-03-09 01:01:10.661613 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661618 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661622 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661627 | orchestrator |
2026-03-09 01:01:10.661632 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-09 01:01:10.661637 | orchestrator | Monday 09 March 2026 00:53:27 +0000 (0:00:00.290) 0:04:47.227 **********
2026-03-09 01:01:10.661642 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661647 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661651 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661656 | orchestrator |
2026-03-09 01:01:10.661661 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-09 01:01:10.661666 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.520) 0:04:47.748 **********
2026-03-09 01:01:10.661736 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661741 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661746 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661751 | orchestrator |
2026-03-09 01:01:10.661756 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-09 01:01:10.661761 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:00.317) 0:04:48.066 **********
2026-03-09 01:01:10.661766 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661771 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.661776 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.661781 | orchestrator |
2026-03-09 01:01:10.661786 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-09 01:01:10.661791 | orchestrator | Monday 09 March 2026 00:53:29 +0000 (0:00:00.347) 0:04:48.413 **********
2026-03-09 01:01:10.661796 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661801 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661806 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661811 | orchestrator |
2026-03-09 01:01:10.661816 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-09 01:01:10.661821 | orchestrator | Monday 09 March 2026 00:53:29 +0000 (0:00:00.364) 0:04:48.778 **********
2026-03-09 01:01:10.661826 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661831 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661836 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661841 | orchestrator |
2026-03-09 01:01:10.661845 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-09 01:01:10.661850 | orchestrator | Monday 09 March 2026 00:53:30 +0000 (0:00:00.569) 0:04:49.347 **********
2026-03-09 01:01:10.661855 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661860 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661865 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661870 | orchestrator |
2026-03-09 01:01:10.661875 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-09 01:01:10.661880 | orchestrator | Monday 09 March 2026 00:53:30 +0000 (0:00:00.669) 0:04:50.016 **********
2026-03-09 01:01:10.661885 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.661890 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.661894 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.661899 | orchestrator |
2026-03-09 01:01:10.661904 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-09 01:01:10.661909 | orchestrator | Monday 09 March 2026 00:53:31 +0000 (0:00:00.350) 0:04:50.367 **********
2026-03-09 01:01:10.661914 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.661919 | orchestrator |
2026-03-09 01:01:10.661927 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-09 01:01:10.661932 | orchestrator | Monday 09 March 2026 00:53:31 +0000 (0:00:00.781) 0:04:51.149 **********
2026-03-09 01:01:10.661937 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.661942 | orchestrator |
2026-03-09 01:01:10.661965 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-09 01:01:10.661975 | orchestrator | Monday 09 March 2026 00:53:32 +0000 (0:00:00.176) 0:04:51.326 **********
2026-03-09 01:01:10.661980 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-09 01:01:10.661985 | orchestrator |
2026-03-09 01:01:10.661990 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-09 01:01:10.661995 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:00.985) 0:04:52.311 **********
2026-03-09 01:01:10.662000 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662005 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.662009 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.662033 | orchestrator |
2026-03-09 01:01:10.662038 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-09 01:01:10.662043 | orchestrator | Monday 09 March 2026 00:53:33 +0000 (0:00:00.362) 0:04:52.673 **********
2026-03-09 01:01:10.662048 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662053 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.662058 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.662063 | orchestrator |
2026-03-09 01:01:10.662067 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-09 01:01:10.662072 | orchestrator | Monday 09 March 2026 00:53:34 +0000 (0:00:00.698) 0:04:53.372 **********
2026-03-09 01:01:10.662077 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662082 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662087 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662092 | orchestrator |
2026-03-09 01:01:10.662097 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-09 01:01:10.662102 | orchestrator | Monday 09 March 2026 00:53:35 +0000 (0:00:01.442) 0:04:54.815 **********
2026-03-09 01:01:10.662107 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662112 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662117 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662122 | orchestrator |
2026-03-09 01:01:10.662127 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-09 01:01:10.662132 | orchestrator | Monday 09 March 2026 00:53:36 +0000 (0:00:00.991) 0:04:55.806 **********
2026-03-09 01:01:10.662137 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662142 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662146 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662151 | orchestrator |
2026-03-09 01:01:10.662156 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-09 01:01:10.662161 | orchestrator | Monday 09 March 2026 00:53:37 +0000 (0:00:00.728) 0:04:56.535 **********
2026-03-09 01:01:10.662166 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662171 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.662176 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.662181 | orchestrator |
2026-03-09 01:01:10.662186 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-09 01:01:10.662191 | orchestrator | Monday 09 March 2026 00:53:38 +0000 (0:00:00.766) 0:04:57.302 **********
2026-03-09 01:01:10.662196 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662201 | orchestrator |
2026-03-09 01:01:10.662206 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-09 01:01:10.662210 | orchestrator | Monday 09 March 2026 00:53:40 +0000 (0:00:02.009) 0:04:59.312 **********
2026-03-09 01:01:10.662215 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662220 | orchestrator |
2026-03-09 01:01:10.662225 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-09 01:01:10.662230 | orchestrator | Monday 09 March 2026 00:53:41 +0000 (0:00:00.992) 0:05:00.305 **********
2026-03-09 01:01:10.662235 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:01:10.662240 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 01:01:10.662245 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-09 01:01:10.662250 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-09 01:01:10.662259 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:01:10.662264 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 01:01:10.662269 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-09 01:01:10.662273 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-03-09 01:01:10.662278 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:01:10.662283 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-03-09 01:01:10.662288 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 01:01:10.662293 | orchestrator | changed: [testbed-node-1 -> {{ item }}]
2026-03-09 01:01:10.662298 | orchestrator |
2026-03-09 01:01:10.662303 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-09 01:01:10.662308 | orchestrator | Monday 09 March 2026 00:53:44 +0000 (0:00:03.807) 0:05:04.112 **********
2026-03-09 01:01:10.662312 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662317 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662322 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662327 | orchestrator |
2026-03-09 01:01:10.662332 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-09 01:01:10.662337 | orchestrator | Monday 09 March 2026 00:53:46 +0000 (0:00:01.343) 0:05:05.456 **********
2026-03-09 01:01:10.662342 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662347 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.662352 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.662357 | orchestrator |
2026-03-09 01:01:10.662362 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-09 01:01:10.662367 | orchestrator | Monday 09 March 2026 00:53:46 +0000 (0:00:00.457) 0:05:05.914 **********
2026-03-09 01:01:10.662372 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:10.662379 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:10.662384 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:10.662389 | orchestrator |
2026-03-09 01:01:10.662394 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-09 01:01:10.662399 | orchestrator | Monday 09 March 2026 00:53:47 +0000 (0:00:01.284) 0:05:07.198 **********
2026-03-09 01:01:10.662421 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662427 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662432 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662437 | orchestrator |
2026-03-09 01:01:10.662442 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-09 01:01:10.662447 | orchestrator | Monday 09 March 2026 00:53:51 +0000 (0:00:03.547) 0:05:10.745 **********
2026-03-09 01:01:10.662452 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662457 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662462 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662467 | orchestrator |
2026-03-09 01:01:10.662472 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-09 01:01:10.662477 | orchestrator | Monday 09 March 2026 00:53:53 +0000 (0:00:01.719) 0:05:12.464 **********
2026-03-09 01:01:10.662482 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.662487 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.662492 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.662496 | orchestrator |
2026-03-09 01:01:10.662501 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-09 01:01:10.662506 | orchestrator | Monday 09 March 2026 00:53:53 +0000 (0:00:00.439) 0:05:12.904 **********
2026-03-09 01:01:10.662511 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:10.662516 | orchestrator |
2026-03-09 01:01:10.662521 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-09 01:01:10.662526 | orchestrator | Monday 09 March 2026 00:53:54 +0000 (0:00:01.102) 0:05:14.007 **********
2026-03-09 01:01:10.662534 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.662539 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.662544 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.662549 | orchestrator |
2026-03-09 01:01:10.662554 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-09 01:01:10.662559 | orchestrator | Monday 09 March 2026 00:53:55 +0000 (0:00:00.767) 0:05:14.774 **********
2026-03-09 01:01:10.662563 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:10.662568 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:10.662573 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:10.662578 | orchestrator |
2026-03-09 01:01:10.662583 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-09 01:01:10.662588 | orchestrator | Monday 09 March 2026 00:53:56 +0000 (0:00:00.589) 0:05:15.364 **********
2026-03-09 01:01:10.662592 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-09 01:01:10.662597 | orchestrator |
2026-03-09 01:01:10.662602 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-09 01:01:10.662607 | orchestrator | Monday 09 March 2026 00:53:57 +0000 (0:00:01.240) 0:05:16.605 **********
2026-03-09 01:01:10.662612 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662617 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662622 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662627 | orchestrator |
2026-03-09 01:01:10.662631 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-09 01:01:10.662636 | orchestrator | Monday 09 March 2026 00:54:00 +0000 (0:00:02.769) 0:05:19.374 **********
2026-03-09 01:01:10.662641 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662646 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662651 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662656 | orchestrator |
2026-03-09 01:01:10.662660 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-09 01:01:10.662665 | orchestrator | Monday 09 March 2026 00:54:01 +0000 (0:00:01.504) 0:05:20.879 **********
2026-03-09 01:01:10.662683 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662688 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662693 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662698 | orchestrator |
2026-03-09 01:01:10.662703 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-09 01:01:10.662708 | orchestrator | Monday 09 March 2026 00:54:03 +0000 (0:00:01.907) 0:05:22.787 **********
2026-03-09 01:01:10.662713 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:01:10.662718 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:01:10.662723 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:01:10.662728 | orchestrator |
2026-03-09 01:01:10.662733 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml]
********************************** 2026-03-09 01:01:10.662738 | orchestrator | Monday 09 March 2026 00:54:05 +0000 (0:00:02.428) 0:05:25.215 ********** 2026-03-09 01:01:10.662743 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.662748 | orchestrator | 2026-03-09 01:01:10.662753 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-09 01:01:10.662758 | orchestrator | Monday 09 March 2026 00:54:06 +0000 (0:00:00.834) 0:05:26.050 ********** 2026-03-09 01:01:10.662763 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-03-09 01:01:10.662768 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.662773 | orchestrator | 2026-03-09 01:01:10.662777 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-09 01:01:10.662782 | orchestrator | Monday 09 March 2026 00:54:28 +0000 (0:00:22.103) 0:05:48.153 ********** 2026-03-09 01:01:10.662787 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.662792 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.662797 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.662808 | orchestrator | 2026-03-09 01:01:10.662813 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-09 01:01:10.662818 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:09.711) 0:05:57.865 ********** 2026-03-09 01:01:10.662826 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.662831 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.662836 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.662841 | orchestrator | 2026-03-09 01:01:10.662846 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-09 01:01:10.662867 | orchestrator | 
Monday 09 March 2026 00:54:39 +0000 (0:00:00.643) 0:05:58.508 ********** 2026-03-09 01:01:10.662875 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-09 01:01:10.662882 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-09 01:01:10.662888 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-09 01:01:10.662895 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-09 01:01:10.662901 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-09 01:01:10.662907 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2ed6f8ec24fcc0d2da942b52ff5a490891ec85df'}])  2026-03-09 01:01:10.662914 | orchestrator | 2026-03-09 01:01:10.662919 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 01:01:10.662924 | orchestrator | Monday 09 March 2026 00:54:54 +0000 (0:00:15.233) 0:06:13.741 ********** 2026-03-09 01:01:10.662929 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.662933 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.662938 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.662943 | orchestrator | 2026-03-09 01:01:10.662948 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-09 01:01:10.662953 | orchestrator | Monday 09 March 2026 00:54:54 +0000 (0:00:00.356) 0:06:14.098 ********** 2026-03-09 01:01:10.662958 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.662966 | orchestrator | 2026-03-09 01:01:10.662971 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-09 01:01:10.662976 | orchestrator | Monday 09 March 2026 00:54:55 +0000 (0:00:00.867) 0:06:14.965 ********** 2026-03-09 01:01:10.662981 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.662985 | orchestrator | ok: [testbed-node-1] 2026-03-09 
01:01:10.662990 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.662995 | orchestrator | 2026-03-09 01:01:10.663000 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-09 01:01:10.663005 | orchestrator | Monday 09 March 2026 00:54:56 +0000 (0:00:00.357) 0:06:15.323 ********** 2026-03-09 01:01:10.663010 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663015 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663020 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663024 | orchestrator | 2026-03-09 01:01:10.663029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-09 01:01:10.663034 | orchestrator | Monday 09 March 2026 00:54:56 +0000 (0:00:00.393) 0:06:15.716 ********** 2026-03-09 01:01:10.663039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 01:01:10.663044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 01:01:10.663049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 01:01:10.663057 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663062 | orchestrator | 2026-03-09 01:01:10.663067 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-09 01:01:10.663072 | orchestrator | Monday 09 March 2026 00:54:57 +0000 (0:00:01.021) 0:06:16.738 ********** 2026-03-09 01:01:10.663077 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663096 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663102 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663107 | orchestrator | 2026-03-09 01:01:10.663112 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-09 01:01:10.663117 | orchestrator | 2026-03-09 01:01:10.663122 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-09 01:01:10.663127 | orchestrator | Monday 09 March 2026 00:54:58 +0000 (0:00:00.928) 0:06:17.666 ********** 2026-03-09 01:01:10.663132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.663137 | orchestrator | 2026-03-09 01:01:10.663142 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-09 01:01:10.663147 | orchestrator | Monday 09 March 2026 00:54:58 +0000 (0:00:00.536) 0:06:18.203 ********** 2026-03-09 01:01:10.663152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.663157 | orchestrator | 2026-03-09 01:01:10.663162 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 01:01:10.663167 | orchestrator | Monday 09 March 2026 00:54:59 +0000 (0:00:00.903) 0:06:19.107 ********** 2026-03-09 01:01:10.663172 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663177 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663181 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663186 | orchestrator | 2026-03-09 01:01:10.663191 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 01:01:10.663197 | orchestrator | Monday 09 March 2026 00:55:00 +0000 (0:00:00.846) 0:06:19.954 ********** 2026-03-09 01:01:10.663202 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663207 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663211 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663216 | orchestrator | 2026-03-09 01:01:10.663221 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 01:01:10.663226 | orchestrator | Monday 09 March 2026 00:55:01 +0000 
(0:00:00.390) 0:06:20.344 ********** 2026-03-09 01:01:10.663232 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663240 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663245 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663250 | orchestrator | 2026-03-09 01:01:10.663255 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 01:01:10.663260 | orchestrator | Monday 09 March 2026 00:55:01 +0000 (0:00:00.678) 0:06:21.022 ********** 2026-03-09 01:01:10.663265 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663270 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663274 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663279 | orchestrator | 2026-03-09 01:01:10.663284 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 01:01:10.663289 | orchestrator | Monday 09 March 2026 00:55:02 +0000 (0:00:00.387) 0:06:21.410 ********** 2026-03-09 01:01:10.663294 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663299 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663304 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663309 | orchestrator | 2026-03-09 01:01:10.663314 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 01:01:10.663319 | orchestrator | Monday 09 March 2026 00:55:02 +0000 (0:00:00.740) 0:06:22.150 ********** 2026-03-09 01:01:10.663324 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663329 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663333 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663338 | orchestrator | 2026-03-09 01:01:10.663343 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 01:01:10.663348 | orchestrator | Monday 09 March 2026 00:55:03 +0000 (0:00:00.372) 
0:06:22.523 ********** 2026-03-09 01:01:10.663353 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663358 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663363 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663368 | orchestrator | 2026-03-09 01:01:10.663373 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 01:01:10.663378 | orchestrator | Monday 09 March 2026 00:55:03 +0000 (0:00:00.655) 0:06:23.178 ********** 2026-03-09 01:01:10.663383 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663388 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663393 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663398 | orchestrator | 2026-03-09 01:01:10.663403 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 01:01:10.663407 | orchestrator | Monday 09 March 2026 00:55:04 +0000 (0:00:00.797) 0:06:23.975 ********** 2026-03-09 01:01:10.663412 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663417 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663422 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663427 | orchestrator | 2026-03-09 01:01:10.663431 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 01:01:10.663436 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:00.826) 0:06:24.801 ********** 2026-03-09 01:01:10.663441 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663446 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663451 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663456 | orchestrator | 2026-03-09 01:01:10.663461 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 01:01:10.663466 | orchestrator | Monday 09 March 2026 00:55:05 +0000 (0:00:00.370) 0:06:25.171 ********** 2026-03-09 
01:01:10.663471 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663476 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663480 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663485 | orchestrator | 2026-03-09 01:01:10.663490 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 01:01:10.663495 | orchestrator | Monday 09 March 2026 00:55:06 +0000 (0:00:00.706) 0:06:25.878 ********** 2026-03-09 01:01:10.663503 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663508 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663513 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663521 | orchestrator | 2026-03-09 01:01:10.663526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 01:01:10.663546 | orchestrator | Monday 09 March 2026 00:55:06 +0000 (0:00:00.363) 0:06:26.242 ********** 2026-03-09 01:01:10.663552 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663557 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663567 | orchestrator | 2026-03-09 01:01:10.663572 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 01:01:10.663577 | orchestrator | Monday 09 March 2026 00:55:07 +0000 (0:00:00.385) 0:06:26.627 ********** 2026-03-09 01:01:10.663582 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663587 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663592 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663597 | orchestrator | 2026-03-09 01:01:10.663602 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 01:01:10.663607 | orchestrator | Monday 09 March 2026 00:55:07 +0000 (0:00:00.350) 0:06:26.978 ********** 2026-03-09 01:01:10.663612 | 
orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663616 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663621 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663626 | orchestrator | 2026-03-09 01:01:10.663631 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 01:01:10.663636 | orchestrator | Monday 09 March 2026 00:55:08 +0000 (0:00:00.353) 0:06:27.331 ********** 2026-03-09 01:01:10.663641 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663646 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663651 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663655 | orchestrator | 2026-03-09 01:01:10.663660 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 01:01:10.663666 | orchestrator | Monday 09 March 2026 00:55:08 +0000 (0:00:00.618) 0:06:27.950 ********** 2026-03-09 01:01:10.663680 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663685 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663690 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663694 | orchestrator | 2026-03-09 01:01:10.663700 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 01:01:10.663705 | orchestrator | Monday 09 March 2026 00:55:09 +0000 (0:00:00.379) 0:06:28.329 ********** 2026-03-09 01:01:10.663710 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663715 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663720 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663724 | orchestrator | 2026-03-09 01:01:10.663729 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 01:01:10.663734 | orchestrator | Monday 09 March 2026 00:55:09 +0000 (0:00:00.390) 0:06:28.720 ********** 2026-03-09 01:01:10.663739 | orchestrator | ok: [testbed-node-0] 
2026-03-09 01:01:10.663744 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663749 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663754 | orchestrator | 2026-03-09 01:01:10.663759 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-09 01:01:10.663764 | orchestrator | Monday 09 March 2026 00:55:10 +0000 (0:00:00.910) 0:06:29.631 ********** 2026-03-09 01:01:10.663768 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 01:01:10.663773 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.663778 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.663783 | orchestrator | 2026-03-09 01:01:10.663788 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-09 01:01:10.663793 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:00.742) 0:06:30.373 ********** 2026-03-09 01:01:10.663798 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.663807 | orchestrator | 2026-03-09 01:01:10.663812 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-09 01:01:10.663817 | orchestrator | Monday 09 March 2026 00:55:11 +0000 (0:00:00.589) 0:06:30.962 ********** 2026-03-09 01:01:10.663821 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.663826 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.663831 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.663836 | orchestrator | 2026-03-09 01:01:10.663841 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-09 01:01:10.663846 | orchestrator | Monday 09 March 2026 00:55:12 +0000 (0:00:00.808) 0:06:31.771 ********** 2026-03-09 01:01:10.663851 | 
orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.663856 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.663861 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.663865 | orchestrator | 2026-03-09 01:01:10.663870 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-09 01:01:10.663875 | orchestrator | Monday 09 March 2026 00:55:13 +0000 (0:00:00.653) 0:06:32.424 ********** 2026-03-09 01:01:10.663880 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:01:10.663885 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:01:10.663890 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:01:10.663895 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-09 01:01:10.663900 | orchestrator | 2026-03-09 01:01:10.663905 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-09 01:01:10.663910 | orchestrator | Monday 09 March 2026 00:55:24 +0000 (0:00:10.967) 0:06:43.391 ********** 2026-03-09 01:01:10.663915 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.663920 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.663925 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.663930 | orchestrator | 2026-03-09 01:01:10.663935 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-09 01:01:10.663940 | orchestrator | Monday 09 March 2026 00:55:24 +0000 (0:00:00.445) 0:06:43.837 ********** 2026-03-09 01:01:10.663945 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-09 01:01:10.663954 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 01:01:10.663959 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 01:01:10.663964 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-09 01:01:10.663969 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:01:10.663990 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:01:10.663996 | orchestrator | 2026-03-09 01:01:10.664001 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-09 01:01:10.664006 | orchestrator | Monday 09 March 2026 00:55:26 +0000 (0:00:02.185) 0:06:46.022 ********** 2026-03-09 01:01:10.664011 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-09 01:01:10.664016 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 01:01:10.664021 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 01:01:10.664026 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:01:10.664031 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-09 01:01:10.664035 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-09 01:01:10.664040 | orchestrator | 2026-03-09 01:01:10.664045 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-09 01:01:10.664050 | orchestrator | Monday 09 March 2026 00:55:28 +0000 (0:00:01.289) 0:06:47.312 ********** 2026-03-09 01:01:10.664055 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.664060 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.664065 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.664070 | orchestrator | 2026-03-09 01:01:10.664075 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-09 01:01:10.664080 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:01.179) 0:06:48.491 ********** 2026-03-09 01:01:10.664088 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664093 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.664098 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.664103 | 
orchestrator | 2026-03-09 01:01:10.664108 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-09 01:01:10.664112 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:00.343) 0:06:48.835 ********** 2026-03-09 01:01:10.664117 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664122 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.664127 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.664132 | orchestrator | 2026-03-09 01:01:10.664137 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-09 01:01:10.664142 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:00.366) 0:06:49.202 ********** 2026-03-09 01:01:10.664147 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.664152 | orchestrator | 2026-03-09 01:01:10.664157 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-09 01:01:10.664161 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.945) 0:06:50.147 ********** 2026-03-09 01:01:10.664166 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664171 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.664176 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.664181 | orchestrator | 2026-03-09 01:01:10.664186 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-09 01:01:10.664191 | orchestrator | Monday 09 March 2026 00:55:31 +0000 (0:00:00.384) 0:06:50.532 ********** 2026-03-09 01:01:10.664196 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664201 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.664205 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.664210 | orchestrator | 2026-03-09 01:01:10.664215 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-09 01:01:10.664220 | orchestrator | Monday 09 March 2026 00:55:31 +0000 (0:00:00.376) 0:06:50.908 ********** 2026-03-09 01:01:10.664225 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.664230 | orchestrator | 2026-03-09 01:01:10.664235 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-09 01:01:10.664240 | orchestrator | Monday 09 March 2026 00:55:32 +0000 (0:00:00.781) 0:06:51.690 ********** 2026-03-09 01:01:10.664244 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664249 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664254 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664259 | orchestrator | 2026-03-09 01:01:10.664264 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-09 01:01:10.664269 | orchestrator | Monday 09 March 2026 00:55:33 +0000 (0:00:01.355) 0:06:53.045 ********** 2026-03-09 01:01:10.664273 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664278 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664283 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664288 | orchestrator | 2026-03-09 01:01:10.664293 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-09 01:01:10.664298 | orchestrator | Monday 09 March 2026 00:55:34 +0000 (0:00:01.114) 0:06:54.160 ********** 2026-03-09 01:01:10.664303 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664308 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664312 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664317 | orchestrator | 2026-03-09 01:01:10.664322 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-09 01:01:10.664327 | orchestrator | Monday 09 March 2026 00:55:36 +0000 (0:00:01.734) 0:06:55.894 ********** 2026-03-09 01:01:10.664332 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664337 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664356 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664361 | orchestrator | 2026-03-09 01:01:10.664366 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-09 01:01:10.664371 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:02.135) 0:06:58.029 ********** 2026-03-09 01:01:10.664375 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664380 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.664388 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-09 01:01:10.664393 | orchestrator | 2026-03-09 01:01:10.664398 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-09 01:01:10.664403 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:00.460) 0:06:58.490 ********** 2026-03-09 01:01:10.664424 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-09 01:01:10.664431 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-09 01:01:10.664436 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-09 01:01:10.664441 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-09 01:01:10.664445 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-09 01:01:10.664451 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.664456 | orchestrator | 2026-03-09 01:01:10.664461 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-09 01:01:10.664465 | orchestrator | Monday 09 March 2026 00:56:09 +0000 (0:00:30.523) 0:07:29.013 ********** 2026-03-09 01:01:10.664470 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.664475 | orchestrator | 2026-03-09 01:01:10.664480 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-09 01:01:10.664485 | orchestrator | Monday 09 March 2026 00:56:11 +0000 (0:00:01.346) 0:07:30.360 ********** 2026-03-09 01:01:10.664490 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.664495 | orchestrator | 2026-03-09 01:01:10.664500 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-09 01:01:10.664505 | orchestrator | Monday 09 March 2026 00:56:11 +0000 (0:00:00.349) 0:07:30.709 ********** 2026-03-09 01:01:10.664510 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.664515 | orchestrator | 2026-03-09 01:01:10.664519 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-09 01:01:10.664524 | orchestrator | Monday 09 March 2026 00:56:11 +0000 (0:00:00.164) 0:07:30.874 ********** 2026-03-09 01:01:10.664530 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-09 01:01:10.664535 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-09 01:01:10.664539 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-09 01:01:10.664544 | orchestrator | 2026-03-09 01:01:10.664549 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-09 01:01:10.664554 | orchestrator | Monday 09 March 2026 00:56:18 +0000 (0:00:06.716) 0:07:37.590 ********** 2026-03-09 01:01:10.664559 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-09 01:01:10.664564 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-09 01:01:10.664569 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-09 01:01:10.664574 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-09 01:01:10.664579 | orchestrator | 2026-03-09 01:01:10.664584 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 01:01:10.664589 | orchestrator | Monday 09 March 2026 00:56:23 +0000 (0:00:05.412) 0:07:43.003 ********** 2026-03-09 01:01:10.664597 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664602 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664607 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664611 | orchestrator | 2026-03-09 01:01:10.664616 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-09 01:01:10.664621 | orchestrator | Monday 09 March 2026 00:56:24 +0000 (0:00:00.728) 0:07:43.731 ********** 2026-03-09 01:01:10.664626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.664631 | orchestrator | 2026-03-09 01:01:10.664636 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-09 01:01:10.664641 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.945) 0:07:44.677 ********** 2026-03-09 01:01:10.664646 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.664651 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.664656 | orchestrator | ok: 
[testbed-node-2] 2026-03-09 01:01:10.664661 | orchestrator | 2026-03-09 01:01:10.664666 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-09 01:01:10.664699 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.379) 0:07:45.057 ********** 2026-03-09 01:01:10.664704 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.664709 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.664714 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.664719 | orchestrator | 2026-03-09 01:01:10.664724 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-09 01:01:10.664729 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:01.215) 0:07:46.272 ********** 2026-03-09 01:01:10.664734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 01:01:10.664739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 01:01:10.664744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 01:01:10.664749 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.664754 | orchestrator | 2026-03-09 01:01:10.664759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-09 01:01:10.664767 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.648) 0:07:46.920 ********** 2026-03-09 01:01:10.664775 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.664783 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.664791 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.664798 | orchestrator | 2026-03-09 01:01:10.664809 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-09 01:01:10.664818 | orchestrator | 2026-03-09 01:01:10.664825 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 
01:01:10.664832 | orchestrator | Monday 09 March 2026 00:56:28 +0000 (0:00:00.925) 0:07:47.845 ********** 2026-03-09 01:01:10.664864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.664874 | orchestrator | 2026-03-09 01:01:10.664880 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-09 01:01:10.664885 | orchestrator | Monday 09 March 2026 00:56:29 +0000 (0:00:00.522) 0:07:48.368 ********** 2026-03-09 01:01:10.664890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.664894 | orchestrator | 2026-03-09 01:01:10.664899 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 01:01:10.664904 | orchestrator | Monday 09 March 2026 00:56:29 +0000 (0:00:00.863) 0:07:49.231 ********** 2026-03-09 01:01:10.664909 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.664914 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.664919 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.664924 | orchestrator | 2026-03-09 01:01:10.664929 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 01:01:10.664940 | orchestrator | Monday 09 March 2026 00:56:30 +0000 (0:00:00.354) 0:07:49.586 ********** 2026-03-09 01:01:10.664945 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.664950 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.664955 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.664960 | orchestrator | 2026-03-09 01:01:10.664964 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 01:01:10.664969 | orchestrator | Monday 09 March 2026 00:56:31 +0000 (0:00:00.754) 0:07:50.341 ********** 
2026-03-09 01:01:10.664974 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.664979 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.664984 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.664989 | orchestrator | 2026-03-09 01:01:10.664994 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 01:01:10.664998 | orchestrator | Monday 09 March 2026 00:56:31 +0000 (0:00:00.749) 0:07:51.090 ********** 2026-03-09 01:01:10.665003 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665008 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665013 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665018 | orchestrator | 2026-03-09 01:01:10.665022 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 01:01:10.665027 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:01.376) 0:07:52.467 ********** 2026-03-09 01:01:10.665032 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665037 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665042 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665047 | orchestrator | 2026-03-09 01:01:10.665052 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 01:01:10.665057 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:00.390) 0:07:52.858 ********** 2026-03-09 01:01:10.665062 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665067 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665071 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665076 | orchestrator | 2026-03-09 01:01:10.665081 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 01:01:10.665086 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:00.377) 0:07:53.236 ********** 2026-03-09 01:01:10.665091 | 
orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665096 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665101 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665105 | orchestrator | 2026-03-09 01:01:10.665110 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 01:01:10.665115 | orchestrator | Monday 09 March 2026 00:56:34 +0000 (0:00:00.359) 0:07:53.595 ********** 2026-03-09 01:01:10.665120 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665125 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665130 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665134 | orchestrator | 2026-03-09 01:01:10.665139 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 01:01:10.665144 | orchestrator | Monday 09 March 2026 00:56:35 +0000 (0:00:01.044) 0:07:54.640 ********** 2026-03-09 01:01:10.665148 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665153 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665157 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665162 | orchestrator | 2026-03-09 01:01:10.665166 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 01:01:10.665171 | orchestrator | Monday 09 March 2026 00:56:36 +0000 (0:00:00.767) 0:07:55.408 ********** 2026-03-09 01:01:10.665176 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665180 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665185 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665189 | orchestrator | 2026-03-09 01:01:10.665194 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 01:01:10.665199 | orchestrator | Monday 09 March 2026 00:56:36 +0000 (0:00:00.346) 0:07:55.754 ********** 2026-03-09 01:01:10.665203 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 01:01:10.665211 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665216 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665220 | orchestrator | 2026-03-09 01:01:10.665225 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 01:01:10.665230 | orchestrator | Monday 09 March 2026 00:56:36 +0000 (0:00:00.306) 0:07:56.060 ********** 2026-03-09 01:01:10.665234 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665239 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665244 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665248 | orchestrator | 2026-03-09 01:01:10.665253 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 01:01:10.665257 | orchestrator | Monday 09 March 2026 00:56:37 +0000 (0:00:00.656) 0:07:56.717 ********** 2026-03-09 01:01:10.665262 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665267 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665274 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665279 | orchestrator | 2026-03-09 01:01:10.665283 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 01:01:10.665288 | orchestrator | Monday 09 March 2026 00:56:37 +0000 (0:00:00.365) 0:07:57.082 ********** 2026-03-09 01:01:10.665293 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665297 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665317 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665323 | orchestrator | 2026-03-09 01:01:10.665327 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 01:01:10.665332 | orchestrator | Monday 09 March 2026 00:56:38 +0000 (0:00:00.393) 0:07:57.476 ********** 2026-03-09 01:01:10.665337 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665341 | 
orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665346 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665351 | orchestrator | 2026-03-09 01:01:10.665355 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 01:01:10.665360 | orchestrator | Monday 09 March 2026 00:56:38 +0000 (0:00:00.351) 0:07:57.828 ********** 2026-03-09 01:01:10.665365 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665369 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665374 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665378 | orchestrator | 2026-03-09 01:01:10.665383 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 01:01:10.665388 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:00.618) 0:07:58.446 ********** 2026-03-09 01:01:10.665392 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665397 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665401 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665406 | orchestrator | 2026-03-09 01:01:10.665411 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 01:01:10.665415 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:00.338) 0:07:58.785 ********** 2026-03-09 01:01:10.665420 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665425 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665429 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665434 | orchestrator | 2026-03-09 01:01:10.665438 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 01:01:10.665443 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:00.346) 0:07:59.132 ********** 2026-03-09 01:01:10.665448 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665452 | orchestrator | ok: 
[testbed-node-4] 2026-03-09 01:01:10.665457 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665461 | orchestrator | 2026-03-09 01:01:10.665466 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-09 01:01:10.665471 | orchestrator | Monday 09 March 2026 00:56:40 +0000 (0:00:00.896) 0:08:00.028 ********** 2026-03-09 01:01:10.665475 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665480 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665485 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665493 | orchestrator | 2026-03-09 01:01:10.665497 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-09 01:01:10.665502 | orchestrator | Monday 09 March 2026 00:56:41 +0000 (0:00:00.524) 0:08:00.553 ********** 2026-03-09 01:01:10.665507 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:01:10.665512 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:01:10.665516 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:01:10.665521 | orchestrator | 2026-03-09 01:01:10.665526 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-09 01:01:10.665530 | orchestrator | Monday 09 March 2026 00:56:41 +0000 (0:00:00.674) 0:08:01.228 ********** 2026-03-09 01:01:10.665535 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.665540 | orchestrator | 2026-03-09 01:01:10.665544 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-09 01:01:10.665549 | orchestrator | Monday 09 March 2026 00:56:42 +0000 (0:00:00.608) 0:08:01.836 ********** 2026-03-09 01:01:10.665554 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 01:01:10.665558 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665563 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665567 | orchestrator | 2026-03-09 01:01:10.665572 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-09 01:01:10.665577 | orchestrator | Monday 09 March 2026 00:56:43 +0000 (0:00:00.672) 0:08:02.508 ********** 2026-03-09 01:01:10.665582 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665586 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665591 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665595 | orchestrator | 2026-03-09 01:01:10.665600 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-09 01:01:10.665605 | orchestrator | Monday 09 March 2026 00:56:43 +0000 (0:00:00.359) 0:08:02.868 ********** 2026-03-09 01:01:10.665609 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665614 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665619 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665623 | orchestrator | 2026-03-09 01:01:10.665628 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-09 01:01:10.665633 | orchestrator | Monday 09 March 2026 00:56:44 +0000 (0:00:00.749) 0:08:03.618 ********** 2026-03-09 01:01:10.665637 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.665642 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.665646 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.665651 | orchestrator | 2026-03-09 01:01:10.665656 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-09 01:01:10.665660 | orchestrator | Monday 09 March 2026 00:56:44 +0000 (0:00:00.365) 0:08:03.983 ********** 2026-03-09 01:01:10.665665 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 01:01:10.665688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 01:01:10.665695 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 01:01:10.665700 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 01:01:10.665705 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 01:01:10.665713 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 01:01:10.665717 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 01:01:10.665722 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 01:01:10.665727 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 01:01:10.665734 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 01:01:10.665739 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 01:01:10.665744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 01:01:10.665748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 01:01:10.665753 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 01:01:10.665757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 01:01:10.665762 | orchestrator | 2026-03-09 01:01:10.665767 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-09 01:01:10.665771 | orchestrator | Monday 09 March 2026 00:56:47 +0000 (0:00:02.440) 0:08:06.424 ********** 2026-03-09 01:01:10.665776 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.665780 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.665785 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.665790 | orchestrator | 2026-03-09 01:01:10.665794 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-09 01:01:10.665799 | orchestrator | Monday 09 March 2026 00:56:47 +0000 (0:00:00.344) 0:08:06.768 ********** 2026-03-09 01:01:10.665803 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.665808 | orchestrator | 2026-03-09 01:01:10.665813 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-09 01:01:10.665817 | orchestrator | Monday 09 March 2026 00:56:48 +0000 (0:00:00.554) 0:08:07.323 ********** 2026-03-09 01:01:10.665822 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 01:01:10.665827 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 01:01:10.665831 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-09 01:01:10.665836 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-09 01:01:10.665840 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 01:01:10.665845 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-09 01:01:10.665850 | orchestrator | 2026-03-09 01:01:10.665854 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-09 01:01:10.665859 | orchestrator | Monday 09 March 2026 00:56:49 +0000 (0:00:01.420) 0:08:08.744 ********** 2026-03-09 01:01:10.665864 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:01:10.665868 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 01:01:10.665873 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 01:01:10.665878 | orchestrator | 2026-03-09 01:01:10.665882 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-09 01:01:10.665887 | orchestrator | Monday 09 March 2026 00:56:51 +0000 (0:00:02.247) 0:08:10.991 ********** 2026-03-09 01:01:10.665892 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:01:10.665896 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 01:01:10.665901 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.665906 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:01:10.665910 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 01:01:10.665915 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.665919 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:01:10.665924 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 01:01:10.665929 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.665933 | orchestrator | 2026-03-09 01:01:10.665938 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-09 01:01:10.665946 | orchestrator | Monday 09 March 2026 00:56:52 +0000 (0:00:01.205) 0:08:12.197 ********** 2026-03-09 01:01:10.665951 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.665955 | orchestrator | 2026-03-09 01:01:10.665961 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-09 01:01:10.665969 | orchestrator | Monday 09 March 2026 00:56:55 +0000 (0:00:02.121) 0:08:14.319 ********** 2026-03-09 01:01:10.665976 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.665983 | orchestrator | 2026-03-09 01:01:10.665990 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-09 01:01:10.665997 | orchestrator | Monday 09 March 2026 00:56:55 +0000 (0:00:00.865) 0:08:15.184 ********** 2026-03-09 01:01:10.666004 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a76ca51e-4549-54be-bcb5-a2c49bca5f85', 'data_vg': 'ceph-a76ca51e-4549-54be-bcb5-a2c49bca5f85'}) 2026-03-09 01:01:10.666040 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd', 'data_vg': 'ceph-2e0d7a52-9ca0-5b92-a6d3-76d99ccb83bd'}) 2026-03-09 01:01:10.666058 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0', 'data_vg': 'ceph-330a9702-ab5a-5bf7-9b95-ebb8b4c554e0'}) 2026-03-09 01:01:10.666066 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-30c2fd4e-0770-5a21-8e5f-9ea8386abee3', 'data_vg': 'ceph-30c2fd4e-0770-5a21-8e5f-9ea8386abee3'}) 2026-03-09 01:01:10.666074 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bfced398-94c6-51d2-a38a-d9d8acf734fd', 'data_vg': 'ceph-bfced398-94c6-51d2-a38a-d9d8acf734fd'}) 2026-03-09 01:01:10.666082 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1060daf8-ac1b-51e4-8c2b-8176ae449cc2', 'data_vg': 'ceph-1060daf8-ac1b-51e4-8c2b-8176ae449cc2'}) 2026-03-09 01:01:10.666090 | orchestrator | 2026-03-09 01:01:10.666097 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-09 01:01:10.666101 | orchestrator | Monday 09 March 2026 00:57:36 +0000 (0:00:40.467) 0:08:55.651 ********** 2026-03-09 01:01:10.666106 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666111 | orchestrator | skipping: [testbed-node-4] 2026-03-09 
01:01:10.666115 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666120 | orchestrator | 2026-03-09 01:01:10.666125 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-09 01:01:10.666129 | orchestrator | Monday 09 March 2026 00:57:36 +0000 (0:00:00.560) 0:08:56.211 ********** 2026-03-09 01:01:10.666134 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.666139 | orchestrator | 2026-03-09 01:01:10.666143 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-09 01:01:10.666148 | orchestrator | Monday 09 March 2026 00:57:38 +0000 (0:00:01.303) 0:08:57.515 ********** 2026-03-09 01:01:10.666152 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.666157 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.666161 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.666166 | orchestrator | 2026-03-09 01:01:10.666171 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-09 01:01:10.666175 | orchestrator | Monday 09 March 2026 00:57:39 +0000 (0:00:00.817) 0:08:58.333 ********** 2026-03-09 01:01:10.666180 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.666184 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.666189 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.666194 | orchestrator | 2026-03-09 01:01:10.666198 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-09 01:01:10.666203 | orchestrator | Monday 09 March 2026 00:57:41 +0000 (0:00:02.772) 0:09:01.105 ********** 2026-03-09 01:01:10.666207 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.666216 | orchestrator | 2026-03-09 01:01:10.666221 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-09 01:01:10.666226 | orchestrator | Monday 09 March 2026 00:57:42 +0000 (0:00:00.864) 0:09:01.970 ********** 2026-03-09 01:01:10.666230 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.666235 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.666239 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.666244 | orchestrator | 2026-03-09 01:01:10.666249 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-09 01:01:10.666253 | orchestrator | Monday 09 March 2026 00:57:43 +0000 (0:00:01.206) 0:09:03.176 ********** 2026-03-09 01:01:10.666258 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.666262 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.666267 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.666272 | orchestrator | 2026-03-09 01:01:10.666276 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-09 01:01:10.666281 | orchestrator | Monday 09 March 2026 00:57:45 +0000 (0:00:01.231) 0:09:04.408 ********** 2026-03-09 01:01:10.666286 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.666290 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.666295 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.666299 | orchestrator | 2026-03-09 01:01:10.666304 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-09 01:01:10.666309 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:01.905) 0:09:06.314 ********** 2026-03-09 01:01:10.666313 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666318 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666322 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666327 | orchestrator | 2026-03-09 01:01:10.666332 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-09 01:01:10.666336 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:00.657) 0:09:06.971 ********** 2026-03-09 01:01:10.666341 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666346 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666350 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666355 | orchestrator | 2026-03-09 01:01:10.666359 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-09 01:01:10.666364 | orchestrator | Monday 09 March 2026 00:57:48 +0000 (0:00:00.380) 0:09:07.351 ********** 2026-03-09 01:01:10.666369 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-09 01:01:10.666373 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-09 01:01:10.666378 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-09 01:01:10.666382 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 01:01:10.666387 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-09 01:01:10.666392 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-09 01:01:10.666396 | orchestrator | 2026-03-09 01:01:10.666401 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-09 01:01:10.666406 | orchestrator | Monday 09 March 2026 00:57:49 +0000 (0:00:01.023) 0:09:08.374 ********** 2026-03-09 01:01:10.666415 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-09 01:01:10.666420 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-09 01:01:10.666425 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-09 01:01:10.666430 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-09 01:01:10.666438 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-09 01:01:10.666443 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-09 01:01:10.666448 | orchestrator | 2026-03-09 01:01:10.666452 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-09 01:01:10.666457 | orchestrator | Monday 09 March 2026 00:57:51 +0000 (0:00:02.204) 0:09:10.578 ********** 2026-03-09 01:01:10.666462 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-09 01:01:10.666466 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-09 01:01:10.666471 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-09 01:01:10.666479 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-09 01:01:10.666484 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-09 01:01:10.666488 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-09 01:01:10.666493 | orchestrator | 2026-03-09 01:01:10.666497 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-09 01:01:10.666502 | orchestrator | Monday 09 March 2026 00:57:55 +0000 (0:00:04.019) 0:09:14.598 ********** 2026-03-09 01:01:10.666507 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666511 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.666521 | orchestrator | 2026-03-09 01:01:10.666525 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-09 01:01:10.666530 | orchestrator | Monday 09 March 2026 00:57:57 +0000 (0:00:02.513) 0:09:17.111 ********** 2026-03-09 01:01:10.666535 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666539 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666544 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-09 01:01:10.666549 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.666553 | orchestrator | 2026-03-09 01:01:10.666558 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-09 01:01:10.666562 | orchestrator | Monday 09 March 2026 00:58:10 +0000 (0:00:12.693) 0:09:29.805 ********** 2026-03-09 01:01:10.666567 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666572 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666576 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666581 | orchestrator | 2026-03-09 01:01:10.666586 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 01:01:10.666590 | orchestrator | Monday 09 March 2026 00:58:11 +0000 (0:00:01.298) 0:09:31.104 ********** 2026-03-09 01:01:10.666595 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666599 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666604 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666608 | orchestrator | 2026-03-09 01:01:10.666613 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-09 01:01:10.666618 | orchestrator | Monday 09 March 2026 00:58:12 +0000 (0:00:00.385) 0:09:31.489 ********** 2026-03-09 01:01:10.666622 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.666627 | orchestrator | 2026-03-09 01:01:10.666632 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-09 01:01:10.666636 | orchestrator | Monday 09 March 2026 00:58:12 +0000 (0:00:00.574) 0:09:32.064 ********** 2026-03-09 01:01:10.666641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.666645 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-09 01:01:10.666650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.666654 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666659 | orchestrator | 2026-03-09 01:01:10.666664 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-09 01:01:10.666679 | orchestrator | Monday 09 March 2026 00:58:13 +0000 (0:00:01.112) 0:09:33.177 ********** 2026-03-09 01:01:10.666684 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666689 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666694 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666698 | orchestrator | 2026-03-09 01:01:10.666703 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-09 01:01:10.666708 | orchestrator | Monday 09 March 2026 00:58:14 +0000 (0:00:00.381) 0:09:33.558 ********** 2026-03-09 01:01:10.666712 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666717 | orchestrator | 2026-03-09 01:01:10.666725 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-09 01:01:10.666729 | orchestrator | Monday 09 March 2026 00:58:14 +0000 (0:00:00.300) 0:09:33.858 ********** 2026-03-09 01:01:10.666734 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666738 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666743 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666748 | orchestrator | 2026-03-09 01:01:10.666752 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-09 01:01:10.666757 | orchestrator | Monday 09 March 2026 00:58:14 +0000 (0:00:00.354) 0:09:34.213 ********** 2026-03-09 01:01:10.666762 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666766 | orchestrator | 2026-03-09 01:01:10.666771 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-09 01:01:10.666776 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:00.241) 0:09:34.454 ********** 2026-03-09 01:01:10.666780 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666785 | orchestrator | 2026-03-09 01:01:10.666790 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-09 01:01:10.666794 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:00.269) 0:09:34.724 ********** 2026-03-09 01:01:10.666799 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666803 | orchestrator | 2026-03-09 01:01:10.666811 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-09 01:01:10.666816 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:00.142) 0:09:34.867 ********** 2026-03-09 01:01:10.666820 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666825 | orchestrator | 2026-03-09 01:01:10.666832 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-09 01:01:10.666837 | orchestrator | Monday 09 March 2026 00:58:15 +0000 (0:00:00.226) 0:09:35.093 ********** 2026-03-09 01:01:10.666842 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666846 | orchestrator | 2026-03-09 01:01:10.666851 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-09 01:01:10.666855 | orchestrator | Monday 09 March 2026 00:58:16 +0000 (0:00:00.890) 0:09:35.984 ********** 2026-03-09 01:01:10.666860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.666865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.666869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.666874 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:01:10.666878 | orchestrator | 2026-03-09 01:01:10.666883 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-09 01:01:10.666888 | orchestrator | Monday 09 March 2026 00:58:17 +0000 (0:00:00.461) 0:09:36.445 ********** 2026-03-09 01:01:10.666892 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666897 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.666901 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.666906 | orchestrator | 2026-03-09 01:01:10.666911 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-09 01:01:10.666915 | orchestrator | Monday 09 March 2026 00:58:17 +0000 (0:00:00.339) 0:09:36.784 ********** 2026-03-09 01:01:10.666920 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666925 | orchestrator | 2026-03-09 01:01:10.666929 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-09 01:01:10.666934 | orchestrator | Monday 09 March 2026 00:58:17 +0000 (0:00:00.275) 0:09:37.060 ********** 2026-03-09 01:01:10.666939 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.666943 | orchestrator | 2026-03-09 01:01:10.666948 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-09 01:01:10.666952 | orchestrator | 2026-03-09 01:01:10.666957 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 01:01:10.666962 | orchestrator | Monday 09 March 2026 00:58:18 +0000 (0:00:01.032) 0:09:38.093 ********** 2026-03-09 01:01:10.666967 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.666975 | orchestrator | 2026-03-09 01:01:10.666979 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-09 01:01:10.666984 | orchestrator | Monday 09 March 2026 00:58:20 +0000 (0:00:01.362) 0:09:39.456 ********** 2026-03-09 01:01:10.666989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.666994 | orchestrator | 2026-03-09 01:01:10.666998 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 01:01:10.667003 | orchestrator | Monday 09 March 2026 00:58:21 +0000 (0:00:01.114) 0:09:40.570 ********** 2026-03-09 01:01:10.667007 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667012 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667017 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667021 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667026 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667031 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667035 | orchestrator | 2026-03-09 01:01:10.667040 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 01:01:10.667044 | orchestrator | Monday 09 March 2026 00:58:22 +0000 (0:00:01.378) 0:09:41.948 ********** 2026-03-09 01:01:10.667049 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667054 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667058 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667063 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667068 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667072 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667077 | orchestrator | 2026-03-09 01:01:10.667082 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 01:01:10.667086 | orchestrator | Monday 09 
March 2026 00:58:23 +0000 (0:00:00.704) 0:09:42.653 ********** 2026-03-09 01:01:10.667091 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667095 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667100 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667105 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667109 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667114 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667118 | orchestrator | 2026-03-09 01:01:10.667123 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 01:01:10.667128 | orchestrator | Monday 09 March 2026 00:58:24 +0000 (0:00:01.126) 0:09:43.779 ********** 2026-03-09 01:01:10.667133 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667137 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667142 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667147 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667151 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667156 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667160 | orchestrator | 2026-03-09 01:01:10.667165 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 01:01:10.667170 | orchestrator | Monday 09 March 2026 00:58:25 +0000 (0:00:00.747) 0:09:44.527 ********** 2026-03-09 01:01:10.667174 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667179 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667183 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667188 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667193 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667197 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667202 | orchestrator | 2026-03-09 01:01:10.667209 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-09 01:01:10.667214 | orchestrator | Monday 09 March 2026 00:58:26 +0000 (0:00:01.347) 0:09:45.875 ********** 2026-03-09 01:01:10.667219 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667227 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667235 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667240 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667244 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667249 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667254 | orchestrator | 2026-03-09 01:01:10.667259 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 01:01:10.667263 | orchestrator | Monday 09 March 2026 00:58:27 +0000 (0:00:00.622) 0:09:46.498 ********** 2026-03-09 01:01:10.667268 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667273 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667277 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667282 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667286 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667291 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667296 | orchestrator | 2026-03-09 01:01:10.667300 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 01:01:10.667305 | orchestrator | Monday 09 March 2026 00:58:28 +0000 (0:00:00.975) 0:09:47.473 ********** 2026-03-09 01:01:10.667309 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667314 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667319 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667323 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667328 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667333 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667337 | 
orchestrator | 2026-03-09 01:01:10.667342 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 01:01:10.667347 | orchestrator | Monday 09 March 2026 00:58:29 +0000 (0:00:01.279) 0:09:48.752 ********** 2026-03-09 01:01:10.667351 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667356 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667360 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667365 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667369 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667374 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667379 | orchestrator | 2026-03-09 01:01:10.667384 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 01:01:10.667388 | orchestrator | Monday 09 March 2026 00:58:30 +0000 (0:00:01.502) 0:09:50.254 ********** 2026-03-09 01:01:10.667393 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667397 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667402 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667407 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667411 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667416 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667420 | orchestrator | 2026-03-09 01:01:10.667425 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 01:01:10.667430 | orchestrator | Monday 09 March 2026 00:58:31 +0000 (0:00:00.842) 0:09:51.097 ********** 2026-03-09 01:01:10.667434 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667439 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667443 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667448 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667453 | orchestrator | ok: [testbed-node-1] 2026-03-09 
01:01:10.667457 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667462 | orchestrator | 2026-03-09 01:01:10.667467 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 01:01:10.667471 | orchestrator | Monday 09 March 2026 00:58:32 +0000 (0:00:01.025) 0:09:52.122 ********** 2026-03-09 01:01:10.667476 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667480 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667485 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667489 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667494 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667502 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667506 | orchestrator | 2026-03-09 01:01:10.667511 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 01:01:10.667516 | orchestrator | Monday 09 March 2026 00:58:33 +0000 (0:00:00.668) 0:09:52.791 ********** 2026-03-09 01:01:10.667520 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667525 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667530 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667534 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667539 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667543 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667548 | orchestrator | 2026-03-09 01:01:10.667552 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 01:01:10.667557 | orchestrator | Monday 09 March 2026 00:58:34 +0000 (0:00:00.998) 0:09:53.789 ********** 2026-03-09 01:01:10.667562 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667566 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667571 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667576 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 01:01:10.667580 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667585 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667590 | orchestrator | 2026-03-09 01:01:10.667594 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 01:01:10.667599 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:00.644) 0:09:54.434 ********** 2026-03-09 01:01:10.667603 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667608 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667613 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667617 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667622 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667626 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667631 | orchestrator | 2026-03-09 01:01:10.667635 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 01:01:10.667640 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:00.750) 0:09:55.184 ********** 2026-03-09 01:01:10.667645 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667649 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.667654 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667658 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:10.667666 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:10.667701 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:10.667706 | orchestrator | 2026-03-09 01:01:10.667710 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 01:01:10.667715 | orchestrator | Monday 09 March 2026 00:58:36 +0000 (0:00:00.595) 0:09:55.780 ********** 2026-03-09 01:01:10.667723 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.667728 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 01:01:10.667733 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.667737 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667742 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667747 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667751 | orchestrator | 2026-03-09 01:01:10.667756 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 01:01:10.667761 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:00.789) 0:09:56.570 ********** 2026-03-09 01:01:10.667766 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667770 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667775 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667779 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667784 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667789 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667793 | orchestrator | 2026-03-09 01:01:10.667798 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 01:01:10.667803 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:00.579) 0:09:57.149 ********** 2026-03-09 01:01:10.667811 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.667816 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.667820 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.667825 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667829 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.667834 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.667839 | orchestrator | 2026-03-09 01:01:10.667843 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-09 01:01:10.667848 | orchestrator | Monday 09 March 2026 00:58:39 +0000 (0:00:01.269) 0:09:58.418 ********** 2026-03-09 01:01:10.667853 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.667858 | orchestrator | 2026-03-09 01:01:10.667863 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-09 01:01:10.667867 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:04.374) 0:10:02.793 ********** 2026-03-09 01:01:10.667872 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:01:10.667876 | orchestrator | 2026-03-09 01:01:10.667881 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-09 01:01:10.667886 | orchestrator | Monday 09 March 2026 00:58:45 +0000 (0:00:02.038) 0:10:04.831 ********** 2026-03-09 01:01:10.667891 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.667895 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.667900 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.667904 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.667909 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.667914 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.667918 | orchestrator | 2026-03-09 01:01:10.667923 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-09 01:01:10.667928 | orchestrator | Monday 09 March 2026 00:58:47 +0000 (0:00:02.410) 0:10:07.242 ********** 2026-03-09 01:01:10.667933 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.667937 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.667942 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.667946 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.667951 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.667956 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.667960 | orchestrator | 2026-03-09 01:01:10.667965 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-09 01:01:10.667970 | orchestrator | Monday 09 March 2026 00:58:48 +0000 (0:00:01.006) 0:10:08.248 ********** 2026-03-09 01:01:10.667974 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.667980 | orchestrator | 2026-03-09 01:01:10.667985 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-09 01:01:10.667989 | orchestrator | Monday 09 March 2026 00:58:50 +0000 (0:00:01.261) 0:10:09.509 ********** 2026-03-09 01:01:10.667994 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.667999 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.668003 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.668008 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.668012 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.668017 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.668022 | orchestrator | 2026-03-09 01:01:10.668026 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-09 01:01:10.668031 | orchestrator | Monday 09 March 2026 00:58:51 +0000 (0:00:01.636) 0:10:11.146 ********** 2026-03-09 01:01:10.668036 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.668040 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.668045 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.668049 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.668054 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.668059 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.668067 | orchestrator | 2026-03-09 01:01:10.668072 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-09 01:01:10.668077 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:03.680) 
0:10:14.826 ********** 2026-03-09 01:01:10.668082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:10.668086 | orchestrator | 2026-03-09 01:01:10.668091 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-09 01:01:10.668095 | orchestrator | Monday 09 March 2026 00:58:56 +0000 (0:00:01.279) 0:10:16.105 ********** 2026-03-09 01:01:10.668100 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.668105 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.668109 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.668114 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:10.668121 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:10.668126 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:10.668131 | orchestrator | 2026-03-09 01:01:10.668135 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-09 01:01:10.668140 | orchestrator | Monday 09 March 2026 00:58:57 +0000 (0:00:00.900) 0:10:17.005 ********** 2026-03-09 01:01:10.668147 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.668152 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.668157 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.668161 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:10.668166 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:10.668171 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:10.668175 | orchestrator | 2026-03-09 01:01:10.668180 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-09 01:01:10.668185 | orchestrator | Monday 09 March 2026 00:58:59 +0000 (0:00:02.187) 0:10:19.193 ********** 2026-03-09 01:01:10.668189 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.668194 | 
2026-03-09 01:01:10.668199 | orchestrator |
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:59:01 +0000 (0:00:01.256) 0:10:20.450 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:59:01 +0000 (0:00:00.680) 0:10:21.130 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:59:02 +0000 (0:00:00.863) 0:10:21.994 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:59:03 +0000 (0:00:00.339) 0:10:22.333 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:59:03 +0000 (0:00:00.761) 0:10:23.094 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:59:04 +0000 (0:00:01.079) 0:10:24.174 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:59:05 +0000 (0:00:00.799) 0:10:24.973 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:59:06 +0000 (0:00:00.334) 0:10:25.308 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:59:06 +0000 (0:00:00.356) 0:10:25.664 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:59:07 +0000 (0:00:00.665) 0:10:26.330 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:59:07 +0000 (0:00:00.743) 0:10:27.073 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:59:08 +0000 (0:00:00.772) 0:10:27.846 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:59:08 +0000 (0:00:00.320) 0:10:28.167 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:59:09 +0000 (0:00:00.622) 0:10:28.789 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:59:09 +0000 (0:00:00.335) 0:10:29.125 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:59:10 +0000 (0:00:00.353) 0:10:29.478 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:59:10 +0000 (0:00:00.426) 0:10:29.905 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:59:11 +0000 (0:00:00.601) 0:10:30.507 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:59:11 +0000 (0:00:00.306) 0:10:30.814 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:59:11 +0000 (0:00:00.330) 0:10:31.145 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:59:12 +0000 (0:00:00.350) 0:10:31.495 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Monday 09 March 2026 00:59:13 +0000 (0:00:00.860) 0:10:32.356 **********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Monday 09 March 2026 00:59:13 +0000 (0:00:00.449) 0:10:32.806 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Monday 09 March 2026 00:59:15 +0000 (0:00:02.202) 0:10:35.008 **********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Monday 09 March 2026 00:59:15 +0000 (0:00:00.245) 0:10:35.253 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Monday 09 March 2026 00:59:24 +0000 (0:00:08.563) 0:10:43.817 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Monday 09 March 2026 00:59:28 +0000 (0:00:03.789) 0:10:47.606 **********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Monday 09 March 2026 00:59:28 +0000 (0:00:00.645) 0:10:48.251 **********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Monday 09 March 2026 00:59:30 +0000 (0:00:01.290) 0:10:49.542 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Monday 09 March 2026 00:59:32 +0000 (0:00:02.459) 0:10:52.001 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Monday 09 March 2026 00:59:34 +0000 (0:00:03.108) 0:10:53.762 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Monday 09 March 2026 00:59:37 +0000 (0:00:00.348) 0:10:56.871 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Monday 09 March 2026 00:59:37 +0000 (0:00:00.348) 0:10:57.219 **********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Monday 09 March 2026 00:59:38 +0000 (0:00:00.907) 0:10:58.126 **********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Monday 09 March 2026 00:59:39 +0000 (0:00:00.714) 0:10:58.841 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Monday 09 March 2026 00:59:41 +0000 (0:00:01.565) 0:11:00.406 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Monday 09 March 2026 00:59:42 +0000 (0:00:01.758) 0:11:02.165 **********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Systemd start mds container] **********************************
Monday 09 March 2026 00:59:45 +0000 (0:00:02.275) 0:11:04.441 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Monday 09 March 2026 00:59:47 +0000 (0:00:02.383) 0:11:06.824 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:59:48 +0000 (0:00:01.403) 0:11:08.227 **********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Monday 09 March 2026 00:59:49 +0000 (0:00:00.640) 0:11:08.867 **********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Monday 09 March 2026 00:59:50 +0000 (0:00:00.789) 0:11:09.657 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Monday 09 March 2026 00:59:50 +0000 (0:00:00.429) 0:11:10.087 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Monday 09 March 2026 00:59:51 +0000 (0:00:01.131) 0:11:11.219 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Monday 09 March 2026 00:59:52 +0000 (0:00:01.007) 0:11:12.226 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:59:53 +0000 (0:00:00.735) 0:11:12.961 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:59:54 +0000 (0:00:00.493) 0:11:13.455 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:59:55 +0000 (0:00:00.825) 0:11:14.281 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:59:55 +0000 (0:00:00.410) 0:11:14.691 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:59:56 +0000 (0:00:00.691) 0:11:15.383 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:59:57 +0000 (0:00:00.994) 0:11:16.377 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:59:57 +0000 (0:00:00.761) 0:11:17.139 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:59:58 +0000 (0:00:00.339) 0:11:17.478 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:59:58 +0000 (0:00:00.368) 0:11:17.847 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:59:59 +0000 (0:00:00.607) 0:11:18.454 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:59:59 +0000 (0:00:00.756) 0:11:19.211 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 01:00:00 +0000 (0:00:00.797) 0:11:20.008 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 01:00:01 +0000 (0:00:00.360) 0:11:20.369 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 01:00:01 +0000 (0:00:00.776) 0:11:21.146 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 01:00:02 +0000 (0:00:00.452) 0:11:21.598 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 01:00:02 +0000 (0:00:00.515) 0:11:22.113 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 01:00:03 +0000 (0:00:00.403) 0:11:22.516 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 01:00:03 +0000 (0:00:00.658) 0:11:23.175 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 01:00:04 +0000 (0:00:00.380) 0:11:23.556 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 01:00:04 +0000 (0:00:00.420) 0:11:23.976 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 01:00:05 +0000 (0:00:00.355) 0:11:24.331 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Monday 09 March 2026 01:00:05 +0000 (0:00:00.877) 0:11:25.209 **********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Monday 09 March 2026 01:00:06 +0000 (0:00:00.606) 0:11:25.815 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Monday 09 March 2026 01:00:08 +0000 (0:00:02.234) 0:11:28.050 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Monday 09 March 2026 01:00:10 +0000 (0:00:01.537) 0:11:29.587 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Monday 09 March 2026 01:00:10 +0000 (0:00:00.356) 0:11:29.944 **********
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Monday 09 March 2026 01:00:11 +0000 (0:00:00.603) 0:11:30.547 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Monday 09 March 2026 01:00:12 +0000 (0:00:01.463) 0:11:32.011 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Monday 09 March 2026 01:00:17 +0000 (0:00:04.906) 0:11:36.918 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:01:10.670226 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 01:01:10.670230 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:01:10.670234 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 01:01:10.670238 | orchestrator | 2026-03-09 01:01:10.670243 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-09 01:01:10.670247 | orchestrator | Monday 09 March 2026 01:00:20 +0000 (0:00:02.434) 0:11:39.352 ********** 2026-03-09 01:01:10.670251 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:01:10.670255 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.670260 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:01:10.670267 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.670271 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:01:10.670275 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.670279 | orchestrator | 2026-03-09 01:01:10.670283 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-09 01:01:10.670289 | orchestrator | Monday 09 March 2026 01:00:21 +0000 (0:00:01.276) 0:11:40.629 ********** 2026-03-09 01:01:10.670293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-09 01:01:10.670297 | orchestrator | 2026-03-09 01:01:10.670301 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-09 01:01:10.670305 | orchestrator | Monday 09 March 2026 01:00:21 +0000 (0:00:00.238) 0:11:40.867 ********** 2026-03-09 01:01:10.670309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-09 01:01:10.670313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670329 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.670333 | orchestrator | 2026-03-09 01:01:10.670341 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-09 01:01:10.670345 | orchestrator | Monday 09 March 2026 01:00:22 +0000 (0:00:01.254) 0:11:42.122 ********** 2026-03-09 01:01:10.670348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 01:01:10.670368 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:01:10.670372 | orchestrator | 2026-03-09 01:01:10.670375 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-09 01:01:10.670379 | orchestrator | Monday 09 March 2026 01:00:23 +0000 (0:00:00.702) 0:11:42.825 ********** 2026-03-09 01:01:10.670383 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 01:01:10.670387 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 01:01:10.670391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 01:01:10.670395 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 01:01:10.670399 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 01:01:10.670402 | orchestrator | 2026-03-09 01:01:10.670406 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-09 01:01:10.670410 | orchestrator | Monday 09 March 2026 01:00:55 +0000 (0:00:31.879) 0:12:14.704 ********** 2026-03-09 01:01:10.670414 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.670418 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.670422 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.670425 | orchestrator | 2026-03-09 01:01:10.670429 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-09 01:01:10.670433 | orchestrator | 
Monday 09 March 2026 01:00:55 +0000 (0:00:00.347) 0:12:15.052 ********** 2026-03-09 01:01:10.670437 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.670441 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.670445 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.670448 | orchestrator | 2026-03-09 01:01:10.670452 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-09 01:01:10.670456 | orchestrator | Monday 09 March 2026 01:00:56 +0000 (0:00:00.309) 0:12:15.361 ********** 2026-03-09 01:01:10.670460 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.670464 | orchestrator | 2026-03-09 01:01:10.670471 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-09 01:01:10.670477 | orchestrator | Monday 09 March 2026 01:00:56 +0000 (0:00:00.905) 0:12:16.267 ********** 2026-03-09 01:01:10.670481 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.670485 | orchestrator | 2026-03-09 01:01:10.670495 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-09 01:01:10.670502 | orchestrator | Monday 09 March 2026 01:00:57 +0000 (0:00:00.587) 0:12:16.855 ********** 2026-03-09 01:01:10.670507 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.670513 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.670519 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.670524 | orchestrator | 2026-03-09 01:01:10.670529 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-09 01:01:10.670535 | orchestrator | Monday 09 March 2026 01:00:58 +0000 (0:00:01.281) 0:12:18.137 ********** 2026-03-09 01:01:10.670541 | orchestrator | changed: 
[testbed-node-3] 2026-03-09 01:01:10.670547 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.670553 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.670560 | orchestrator | 2026-03-09 01:01:10.670564 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-09 01:01:10.670568 | orchestrator | Monday 09 March 2026 01:01:00 +0000 (0:00:01.574) 0:12:19.712 ********** 2026-03-09 01:01:10.670572 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:01:10.670575 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:01:10.670579 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:01:10.670583 | orchestrator | 2026-03-09 01:01:10.670587 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-09 01:01:10.670590 | orchestrator | Monday 09 March 2026 01:01:02 +0000 (0:00:01.830) 0:12:21.542 ********** 2026-03-09 01:01:10.670594 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.670598 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.670602 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 01:01:10.670606 | orchestrator | 2026-03-09 01:01:10.670609 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 01:01:10.670613 | orchestrator | Monday 09 March 2026 01:01:05 +0000 (0:00:02.816) 0:12:24.359 ********** 2026-03-09 01:01:10.670617 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.670621 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.670624 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.670628 | orchestrator 
| 2026-03-09 01:01:10.670632 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-09 01:01:10.670636 | orchestrator | Monday 09 March 2026 01:01:05 +0000 (0:00:00.401) 0:12:24.760 ********** 2026-03-09 01:01:10.670640 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:01:10.670644 | orchestrator | 2026-03-09 01:01:10.670647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-09 01:01:10.670651 | orchestrator | Monday 09 March 2026 01:01:06 +0000 (0:00:00.569) 0:12:25.330 ********** 2026-03-09 01:01:10.670655 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.670659 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.670663 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.670666 | orchestrator | 2026-03-09 01:01:10.670687 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-09 01:01:10.670693 | orchestrator | Monday 09 March 2026 01:01:06 +0000 (0:00:00.630) 0:12:25.960 ********** 2026-03-09 01:01:10.670700 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:01:10.670706 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:01:10.670712 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:01:10.670718 | orchestrator | 2026-03-09 01:01:10.670724 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-09 01:01:10.670728 | orchestrator | Monday 09 March 2026 01:01:07 +0000 (0:00:00.400) 0:12:26.361 ********** 2026-03-09 01:01:10.670732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:01:10.670742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:01:10.670746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:01:10.670750 | orchestrator 
| skipping: [testbed-node-3] 2026-03-09 01:01:10.670754 | orchestrator | 2026-03-09 01:01:10.670757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-09 01:01:10.670761 | orchestrator | Monday 09 March 2026 01:01:07 +0000 (0:00:00.624) 0:12:26.986 ********** 2026-03-09 01:01:10.670765 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:01:10.670769 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:01:10.670773 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:01:10.670777 | orchestrator | 2026-03-09 01:01:10.670780 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:01:10.670784 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-09 01:01:10.670789 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-09 01:01:10.670793 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-09 01:01:10.670796 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-09 01:01:10.670803 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-09 01:01:10.670812 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-09 01:01:10.670816 | orchestrator | 2026-03-09 01:01:10.670820 | orchestrator | 2026-03-09 01:01:10.670823 | orchestrator | 2026-03-09 01:01:10.670827 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:01:10.670831 | orchestrator | Monday 09 March 2026 01:01:07 +0000 (0:00:00.273) 0:12:27.259 ********** 2026-03-09 01:01:10.670835 | orchestrator | =============================================================================== 
2026-03-09 01:01:10.670839 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 42.22s 2026-03-09 01:01:10.670843 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.47s 2026-03-09 01:01:10.670846 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.88s 2026-03-09 01:01:10.670850 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.52s 2026-03-09 01:01:10.670854 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.10s 2026-03-09 01:01:10.670858 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.23s 2026-03-09 01:01:10.670862 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.69s 2026-03-09 01:01:10.670866 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.97s 2026-03-09 01:01:10.670869 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.71s 2026-03-09 01:01:10.670873 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.56s 2026-03-09 01:01:10.670877 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.51s 2026-03-09 01:01:10.670881 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.72s 2026-03-09 01:01:10.670884 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.31s 2026-03-09 01:01:10.670888 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 6.03s 2026-03-09 01:01:10.670892 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.41s 2026-03-09 01:01:10.670896 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.91s 2026-03-09 
01:01:10.670903 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 4.39s 2026-03-09 01:01:10.670909 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.37s 2026-03-09 01:01:10.670915 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.02s 2026-03-09 01:01:10.670921 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.95s 2026-03-09 01:01:10.670927 | orchestrator | 2026-03-09 01:01:10 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:01:10.670933 | orchestrator | 2026-03-09 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:13.704013 | orchestrator | 2026-03-09 01:01:13 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED 2026-03-09 01:01:13.705972 | orchestrator | 2026-03-09 01:01:13 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:01:13.708780 | orchestrator | 2026-03-09 01:01:13 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state STARTED 2026-03-09 01:01:13.708869 | orchestrator | 2026-03-09 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:05.625250 | orchestrator | 2026-03-09 01:02:05 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED 2026-03-09 01:02:05.627031 | orchestrator | 2026-03-09 01:02:05 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED 2026-03-09 01:02:05.629550 | orchestrator | 2026-03-09 01:02:05 | INFO  | Task 14881c17-7ad6-479a-9b03-c125c0b4f4fe is in state
SUCCESS 2026-03-09 01:02:05.631661 | orchestrator | 2026-03-09 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:05.634515 | orchestrator | 2026-03-09 01:02:05.634574 | orchestrator | 2026-03-09 01:02:05.634586 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:02:05.634597 | orchestrator | 2026-03-09 01:02:05.634607 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:02:05.634618 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.305) 0:00:00.305 ********** 2026-03-09 01:02:05.634629 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:02:05.634640 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:02:05.634650 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:02:05.634660 | orchestrator | 2026-03-09 01:02:05.634670 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:02:05.634680 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.301) 0:00:00.607 ********** 2026-03-09 01:02:05.634689 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-09 01:02:05.634698 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-09 01:02:05.634851 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-09 01:02:05.634862 | orchestrator | 2026-03-09 01:02:05.634870 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-09 01:02:05.634900 | orchestrator | 2026-03-09 01:02:05.634909 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:02:05.634917 | orchestrator | Monday 09 March 2026 00:59:12 +0000 (0:00:00.467) 0:00:01.074 ********** 2026-03-09 01:02:05.634926 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-09 01:02:05.634934 | orchestrator | 2026-03-09 01:02:05.634942 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-09 01:02:05.634950 | orchestrator | Monday 09 March 2026 00:59:12 +0000 (0:00:00.530) 0:00:01.605 ********** 2026-03-09 01:02:05.634958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:02:05.634967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:02:05.634975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 01:02:05.634983 | orchestrator | 2026-03-09 01:02:05.634991 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-09 01:02:05.634999 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:01.692) 0:00:03.297 ********** 2026-03-09 01:02:05.635022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635106 | orchestrator | 2026-03-09 01:02:05.635114 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:02:05.635122 | orchestrator | Monday 09 March 2026 00:59:16 +0000 (0:00:01.984) 0:00:05.282 ********** 2026-03-09 01:02:05.635131 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:02:05.635139 | orchestrator | 2026-03-09 01:02:05.635147 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-09 01:02:05.635155 | orchestrator | Monday 09 March 2026 00:59:17 +0000 (0:00:00.664) 0:00:05.946 ********** 2026-03-09 01:02:05.635172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635253 | orchestrator | 2026-03-09 01:02:05.635261 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-09 01:02:05.635269 | orchestrator | Monday 09 March 2026 00:59:20 +0000 (0:00:03.048) 0:00:08.994 ********** 
2026-03-09 01:02:05.635282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-03-09 01:02:05.635300 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:02:05.635314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 01:02:05.635337 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:05.635349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 01:02:05.635366 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:05.635374 | orchestrator | 2026-03-09 01:02:05.635382 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-09 01:02:05.635391 | orchestrator | Monday 09 March 2026 00:59:21 +0000 (0:00:01.329) 0:00:10.324 ********** 2026-03-09 01:02:05.635415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 01:02:05.635447 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:02:05.635484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 01:02:05.635513 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:05.635543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 01:02:05.635557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 01:02:05.635570 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:05.635584 | orchestrator | 2026-03-09 01:02:05.635596 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-09 01:02:05.635609 | orchestrator | Monday 09 March 2026 00:59:22 +0000 (0:00:00.956) 0:00:11.281 ********** 2026-03-09 01:02:05.635629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635729 | orchestrator | 2026-03-09 01:02:05.635738 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-09 01:02:05.635746 | orchestrator | Monday 09 March 2026 00:59:25 +0000 (0:00:02.908) 0:00:14.190 ********** 2026-03-09 01:02:05.635754 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:02:05.635763 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:05.635771 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:02:05.635779 | orchestrator | 2026-03-09 01:02:05.635806 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-09 01:02:05.635814 | orchestrator | Monday 09 March 2026 00:59:28 +0000 (0:00:02.736) 0:00:16.926 ********** 2026-03-09 01:02:05.635823 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:05.635831 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:02:05.635839 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:02:05.635847 | 
orchestrator | 2026-03-09 01:02:05.635855 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-09 01:02:05.635864 | orchestrator | Monday 09 March 2026 00:59:30 +0000 (0:00:02.825) 0:00:19.752 ********** 2026-03-09 01:02:05.635879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-03-09 01:02:05.635901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 01:02:05.635910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-09 01:02:05.635930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 01:02:05.635949 | orchestrator | 2026-03-09 01:02:05.635957 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 01:02:05.635965 | orchestrator | Monday 09 March 2026 00:59:32 +0000 (0:00:02.077) 0:00:21.829 ********** 2026-03-09 01:02:05.635973 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:02:05.635981 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:05.635989 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:05.635997 | orchestrator | 2026-03-09 01:02:05.636005 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:02:05.636014 | orchestrator | Monday 09 March 2026 00:59:33 +0000 (0:00:00.571) 0:00:22.401 ********** 2026-03-09 01:02:05.636022 | orchestrator | 2026-03-09 01:02:05.636034 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:02:05.636042 | orchestrator | Monday 09 March 2026 00:59:33 +0000 (0:00:00.087) 0:00:22.489 ********** 2026-03-09 01:02:05.636050 | orchestrator | 2026-03-09 01:02:05.636058 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-09 01:02:05.636071 | orchestrator | Monday 09 March 2026 00:59:33 +0000 (0:00:00.073) 0:00:22.562 ********** 2026-03-09 01:02:05.636079 | orchestrator | 2026-03-09 01:02:05.636087 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-09 01:02:05.636095 | orchestrator | Monday 09 March 2026 00:59:33 +0000 (0:00:00.074) 0:00:22.637 ********** 2026-03-09 01:02:05.636103 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:02:05.636111 | orchestrator | 
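The container definitions above wire a `healthcheck_curl http://<node>:9200` probe into each OpenSearch container, and the tasks that follow wait for the service to answer before applying retention policies. The readiness check amounts to polling an HTTP endpoint until it returns 200 or a deadline passes. A minimal, illustrative Python sketch of that idea (this is not the actual kolla `healthcheck_curl` script; the throwaway local server here merely stands in for OpenSearch on its API port):

```python
# Illustrative readiness probe: poll a URL until it answers HTTP 200 or a
# deadline expires. NOT the kolla healthcheck_curl implementation; the stub
# HTTP server below is a stand-in for an OpenSearch node on :9200.
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def wait_for_http(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Return True once `url` answers with HTTP 200, False after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after the polling interval
        time.sleep(interval)
    return False


class _OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"{}")

    def log_message(self, *args):  # keep the demo quiet
        pass


# Demo against a local stub server on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), _OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ready = wait_for_http(f"http://127.0.0.1:{server.server_address[1]}/", timeout=5, interval=0.2)
server.shutdown()
```

In the deployment itself the probe runs inside the container against the node's API port, on the schedule given in the healthcheck block above (interval 30, 3 retries, 30s timeout).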
2026-03-09 01:02:05.636119 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-09 01:02:05.636127 | orchestrator | Monday 09 March 2026 00:59:34 +0000 (0:00:01.053) 0:00:23.690 **********
2026-03-09 01:02:05.636135 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:05.636143 | orchestrator |
2026-03-09 01:02:05.636151 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-09 01:02:05.636160 | orchestrator | Monday 09 March 2026 00:59:35 +0000 (0:00:00.255) 0:00:23.946 **********
2026-03-09 01:02:05.636168 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:05.636176 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:05.636184 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:05.636192 | orchestrator |
2026-03-09 01:02:05.636200 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-09 01:02:05.636208 | orchestrator | Monday 09 March 2026 01:00:27 +0000 (0:00:52.762) 0:01:16.709 **********
2026-03-09 01:02:05.636216 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:05.636224 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:05.636232 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:05.636240 | orchestrator |
2026-03-09 01:02:05.636248 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-09 01:02:05.636256 | orchestrator | Monday 09 March 2026 01:01:51 +0000 (0:01:24.033) 0:02:40.743 **********
2026-03-09 01:02:05.636264 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:02:05.636272 | orchestrator |
2026-03-09 01:02:05.636280 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-09 01:02:05.636288 | orchestrator | Monday 09 March 2026 01:01:52 +0000 (0:00:00.793) 0:02:41.536 **********
2026-03-09 01:02:05.636296 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:05.636305 | orchestrator |
2026-03-09 01:02:05.636313 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-09 01:02:05.636321 | orchestrator | Monday 09 March 2026 01:01:55 +0000 (0:00:02.577) 0:02:44.114 **********
2026-03-09 01:02:05.636329 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:05.636337 | orchestrator |
2026-03-09 01:02:05.636346 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-09 01:02:05.636354 | orchestrator | Monday 09 March 2026 01:01:57 +0000 (0:00:02.964) 0:02:46.519 **********
2026-03-09 01:02:05.636362 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:05.636370 | orchestrator |
2026-03-09 01:02:05.636378 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-09 01:02:05.636386 | orchestrator | Monday 09 March 2026 01:02:00 +0000 (0:00:02.964) 0:02:49.484 **********
2026-03-09 01:02:05.636394 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:05.636402 | orchestrator |
2026-03-09 01:02:05.636414 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:02:05.636424 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 01:02:05.636434 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 01:02:05.636443 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 01:02:05.636451 | orchestrator |
2026-03-09 01:02:05.636464 | orchestrator |
2026-03-09 01:02:05.636472 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:02:05.636481 | orchestrator | Monday 09 March 2026 01:02:03 +0000 (0:00:02.425) 0:02:51.909 **********
2026-03-09 01:02:05.636489 | orchestrator | ===============================================================================
2026-03-09 01:02:05.636497 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.03s
2026-03-09 01:02:05.636505 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.76s
2026-03-09 01:02:05.636513 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.05s
2026-03-09 01:02:05.636521 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.96s
2026-03-09 01:02:05.636529 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.91s
2026-03-09 01:02:05.636541 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.83s
2026-03-09 01:02:05.636555 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.74s
2026-03-09 01:02:05.636568 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.58s
2026-03-09 01:02:05.636581 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.43s
2026-03-09 01:02:05.636594 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.41s
2026-03-09 01:02:05.636608 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.08s
2026-03-09 01:02:05.636628 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.98s
2026-03-09 01:02:05.636643 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.69s
2026-03-09 01:02:05.636655 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.33s
2026-03-09 01:02:05.636663 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 1.05s
2026-03-09 01:02:05.636671 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s
2026-03-09 01:02:05.636679 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s
2026-03-09 01:02:05.636687 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s
2026-03-09 01:02:05.636695 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2026-03-09 01:02:05.636703 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2026-03-09 01:02:08.670661 | orchestrator | 2026-03-09 01:02:08 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:08.671666 | orchestrator | 2026-03-09 01:02:08 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:08.671699 | orchestrator | 2026-03-09 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:11.721168 | orchestrator | 2026-03-09 01:02:11 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:11.722794 | orchestrator | 2026-03-09 01:02:11 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:11.723026 | orchestrator | 2026-03-09 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:14.762860 | orchestrator | 2026-03-09 01:02:14 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:14.764000 | orchestrator | 2026-03-09 01:02:14 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:14.764046 | orchestrator | 2026-03-09 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:17.804966 | orchestrator | 2026-03-09 01:02:17 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:17.807965 |
orchestrator | 2026-03-09 01:02:17 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:17.808324 | orchestrator | 2026-03-09 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:20.854693 | orchestrator | 2026-03-09 01:02:20 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:20.857210 | orchestrator | 2026-03-09 01:02:20 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:20.857283 | orchestrator | 2026-03-09 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:23.910382 | orchestrator | 2026-03-09 01:02:23 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:23.912726 | orchestrator | 2026-03-09 01:02:23 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state STARTED
2026-03-09 01:02:23.913783 | orchestrator | 2026-03-09 01:02:23 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:26.951391 | orchestrator | 2026-03-09 01:02:26 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:26.954303 | orchestrator | 2026-03-09 01:02:26 | INFO  | Task ac970768-2006-499a-9dc4-f6dfa09451a3 is in state SUCCESS
2026-03-09 01:02:26.955254 | orchestrator | 2026-03-09 01:02:26 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:26.957101 | orchestrator |
2026-03-09 01:02:26.957151 | orchestrator |
2026-03-09 01:02:26.957160 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-09 01:02:26.957166 | orchestrator |
2026-03-09 01:02:26.957172 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-09 01:02:26.957179 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.095) 0:00:00.095 **********
2026-03-09 01:02:26.957185 | orchestrator | ok: [localhost] => {
2026-03-09 01:02:26.957193 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-09 01:02:26.957199 | orchestrator | }
2026-03-09 01:02:26.957205 | orchestrator |
2026-03-09 01:02:26.957211 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-09 01:02:26.957216 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.063) 0:00:00.159 **********
2026-03-09 01:02:26.957222 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-09 01:02:26.957230 | orchestrator | ...ignoring
2026-03-09 01:02:26.957236 | orchestrator |
2026-03-09 01:02:26.957242 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-09 01:02:26.957249 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:02.910) 0:00:03.070 **********
2026-03-09 01:02:26.957255 | orchestrator | skipping: [localhost]
2026-03-09 01:02:26.957260 | orchestrator |
2026-03-09 01:02:26.957266 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-09 01:02:26.957273 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:00.170) 0:00:03.124 **********
2026-03-09 01:02:26.957279 | orchestrator | ok: [localhost]
2026-03-09 01:02:26.957285 | orchestrator |
2026-03-09 01:02:26.957291 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:02:26.957297 | orchestrator |
2026-03-09 01:02:26.957303 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:02:26.957309 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:00.363) 0:00:03.295 **********
2026-03-09 01:02:26.957315 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.957321 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.957327 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.957333 | orchestrator |
2026-03-09 01:02:26.957339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:02:26.957346 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:00.363) 0:00:03.658 **********
2026-03-09 01:02:26.957352 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-09 01:02:26.957563 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-09 01:02:26.957575 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-09 01:02:26.957581 | orchestrator |
2026-03-09 01:02:26.957588 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-09 01:02:26.957594 | orchestrator |
2026-03-09 01:02:26.957600 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-09 01:02:26.957606 | orchestrator | Monday 09 March 2026 00:59:15 +0000 (0:00:00.670) 0:00:04.329 **********
2026-03-09 01:02:26.957612 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-09 01:02:26.957618 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-09 01:02:26.957624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-09 01:02:26.957630 | orchestrator |
2026-03-09 01:02:26.957636 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:02:26.957642 | orchestrator | Monday 09 March 2026 00:59:15 +0000 (0:00:00.384) 0:00:04.713 **********
2026-03-09 01:02:26.957649 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:02:26.957657 | orchestrator |
2026-03-09 01:02:26.957663 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-09 01:02:26.957669 | orchestrator | Monday 09 March 2026 00:59:16 +0000
(0:00:00.542) 0:00:05.255 ********** 2026-03-09 01:02:26.957693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:02:26.957763 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:02:26.957781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:02:26.957789 | orchestrator | 2026-03-09 01:02:26.957804 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-09 01:02:26.957811 | orchestrator | Monday 09 March 2026 00:59:19 +0000 (0:00:03.463) 0:00:08.719 ********** 2026-03-09 01:02:26.957817 | 
orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.957865 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.957873 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.957880 | orchestrator | 2026-03-09 01:02:26.957886 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-09 01:02:26.957893 | orchestrator | Monday 09 March 2026 00:59:20 +0000 (0:00:00.746) 0:00:09.465 ********** 2026-03-09 01:02:26.957899 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.957905 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.957912 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.957918 | orchestrator | 2026-03-09 01:02:26.957924 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-09 01:02:26.957931 | orchestrator | Monday 09 March 2026 00:59:22 +0000 (0:00:01.595) 0:00:11.061 ********** 2026-03-09 01:02:26.957946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:02:26.957960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 01:02:26.957971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.957983 | orchestrator |
2026-03-09 01:02:26.957989 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-09 01:02:26.957996 | orchestrator | Monday 09 March 2026 00:59:25 +0000 (0:00:03.741) 0:00:14.802 **********
2026-03-09 01:02:26.958002 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958009 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958070 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.958079 | orchestrator |
2026-03-09 01:02:26.958086 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-09 01:02:26.958092 | orchestrator | Monday 09 March 2026 00:59:27 +0000 (0:00:01.262) 0:00:16.065 **********
2026-03-09 01:02:26.958099 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.958106 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:26.958112 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:26.958119 | orchestrator |
2026-03-09 01:02:26.958125 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:02:26.958132 | orchestrator | Monday 09 March 2026 00:59:31 +0000 (0:00:04.819) 0:00:20.884 **********
2026-03-09 01:02:26.958139 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:02:26.958145 | orchestrator |
2026-03-09 01:02:26.958152 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-09 01:02:26.958160 | orchestrator | Monday 09 March 2026 00:59:32 +0000 (0:00:00.602) 0:00:21.487 **********
2026-03-09 01:02:26.958173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958187 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958206 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958231 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958238 | orchestrator |
2026-03-09 01:02:26.958244 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-09 01:02:26.958251 | orchestrator | Monday 09 March 2026 00:59:35 +0000 (0:00:03.081) 0:00:24.568 **********
2026-03-09 01:02:26.958262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958270 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958294 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958312 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958319 | orchestrator |
2026-03-09 01:02:26.958325 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-09 01:02:26.958332 | orchestrator | Monday 09 March 2026 00:59:39 +0000 (0:00:03.968) 0:00:28.537 **********
2026-03-09 01:02:26.958343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958359 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958377 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958393 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958400 | orchestrator |
2026-03-09 01:02:26.958407 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-09 01:02:26.958414 | orchestrator | Monday 09 March 2026 00:59:43 +0000 (0:00:03.774) 0:00:32.311 **********
2026-03-09 01:02:26.958429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list':
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-09 01:02:26.958467 | orchestrator |
2026-03-09 01:02:26.958473 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-09 01:02:26.958480 | orchestrator | Monday 09 March 2026 00:59:47 +0000 (0:00:04.320) 0:00:36.631 **********
2026-03-09 01:02:26.958486 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.958493 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:26.958499 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:26.958506 | orchestrator |
2026-03-09 01:02:26.958513 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-09 01:02:26.958520 | orchestrator | Monday 09 March 2026 00:59:48 +0000 (0:00:00.873) 0:00:37.505 **********
2026-03-09 01:02:26.958527 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.958535 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.958542 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.958549 | orchestrator |
2026-03-09 01:02:26.958556 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-09 01:02:26.958563 | orchestrator | Monday 09 March 2026 00:59:48 +0000 (0:00:00.504) 0:00:38.009 **********
2026-03-09 01:02:26.958570 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.958576 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.958583 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.958589 | orchestrator |
2026-03-09 01:02:26.958595 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-09 01:02:26.958602 | orchestrator | Monday 09 March 2026 00:59:49 +0000 (0:00:00.394) 0:00:38.404 **********
2026-03-09 01:02:26.958609 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-09 01:02:26.958617 | orchestrator | ...ignoring
2026-03-09 01:02:26.958624 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-09 01:02:26.958635 | orchestrator | ...ignoring
2026-03-09 01:02:26.958642 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-09 01:02:26.958649 | orchestrator | ...ignoring
2026-03-09 01:02:26.958655 | orchestrator |
2026-03-09 01:02:26.958661 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-09 01:02:26.958667 | orchestrator | Monday 09 March 2026 01:00:00 +0000 (0:00:11.115) 0:00:49.519 **********
2026-03-09 01:02:26.958673 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.958679 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.958686 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.958693 | orchestrator |
2026-03-09 01:02:26.958700 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-09 01:02:26.958707 | orchestrator | Monday 09 March 2026 01:00:00 +0000 (0:00:00.459) 0:00:49.979 **********
2026-03-09 01:02:26.958715 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958722 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958729 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958736 | orchestrator |
2026-03-09 01:02:26.958743 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-09 01:02:26.958750 | orchestrator | Monday 09 March 2026 01:00:01 +0000 (0:00:00.821) 0:00:50.800 **********
2026-03-09 01:02:26.958757 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958764 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958771 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958778 | orchestrator |
2026-03-09 01:02:26.958785 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-09 01:02:26.958792 | orchestrator | Monday 09 March 2026 01:00:02 +0000 (0:00:00.582) 0:00:51.383 **********
2026-03-09 01:02:26.958798 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958805 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958812 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958819 | orchestrator |
2026-03-09 01:02:26.958845 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-09 01:02:26.958857 | orchestrator | Monday 09 March 2026 01:00:02 +0000 (0:00:00.511) 0:00:51.894 **********
2026-03-09 01:02:26.958864 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.958871 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.958877 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.958884 | orchestrator |
2026-03-09 01:02:26.958890 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-09 01:02:26.958897 | orchestrator | Monday 09 March 2026 01:00:03 +0000 (0:00:00.503) 0:00:52.397 **********
2026-03-09 01:02:26.958903 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.958910 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958916 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958923 | orchestrator |
2026-03-09 01:02:26.958930 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:02:26.958936 | orchestrator | Monday 09 March 2026 01:00:04 +0000 (0:00:00.788) 0:00:53.185 **********
2026-03-09 01:02:26.958943 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.958950 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.958956 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-09 01:02:26.958963 | orchestrator |
2026-03-09 01:02:26.958969 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-09 01:02:26.958976 | orchestrator | Monday 09 March 2026 01:00:04 +0000 (0:00:00.513) 0:00:53.699 **********
2026-03-09 01:02:26.958982 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.958989 | orchestrator |
2026-03-09 01:02:26.958995 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-09 01:02:26.959006 | orchestrator | Monday 09 March 2026 01:00:15 +0000 (0:00:10.990) 0:01:04.690 **********
2026-03-09 01:02:26.959018 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.959025 | orchestrator |
2026-03-09 01:02:26.959031 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 01:02:26.959038 | orchestrator | Monday 09 March 2026 01:00:15 +0000 (0:00:00.134) 0:01:04.824 **********
2026-03-09 01:02:26.959045 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.959052 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.959059 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.959065 | orchestrator |
2026-03-09 01:02:26.959072 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-09 01:02:26.959078 | orchestrator | Monday 09 March 2026 01:00:16 +0000 (0:00:01.042) 0:01:05.867 **********
2026-03-09 01:02:26.959084 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.959091 | orchestrator |
2026-03-09 01:02:26.959097 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-09 01:02:26.959104 | orchestrator | Monday 09 March 2026 01:00:25 +0000 (0:00:08.654) 0:01:14.521 **********
2026-03-09 01:02:26.959110 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.959117 | orchestrator |
2026-03-09 01:02:26.959123 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-09 01:02:26.959130 | orchestrator | Monday 09 March 2026 01:00:28 +0000 (0:00:02.715) 0:01:17.237 **********
2026-03-09 01:02:26.959136 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.959142 | orchestrator |
2026-03-09 01:02:26.959149 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-09 01:02:26.959155 | orchestrator | Monday 09 March 2026 01:00:31 +0000 (0:00:03.682) 0:01:20.920 **********
2026-03-09 01:02:26.959162 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.959168 | orchestrator |
2026-03-09 01:02:26.959176 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-09 01:02:26.959182 | orchestrator | Monday 09 March 2026 01:00:32 +0000 (0:00:00.205) 0:01:21.126 **********
2026-03-09 01:02:26.959189 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.959195 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:02:26.959202 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:02:26.959208 | orchestrator |
2026-03-09 01:02:26.959215 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-09 01:02:26.959221 | orchestrator | Monday 09 March 2026 01:00:32 +0000 (0:00:00.458) 0:01:21.584 **********
2026-03-09 01:02:26.959228 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:02:26.959234 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-09 01:02:26.959240 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:26.959247 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:26.959252 | orchestrator |
2026-03-09 01:02:26.959259 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-09 01:02:26.959265 | orchestrator | skipping: no hosts matched
2026-03-09 01:02:26.959271 | orchestrator |
2026-03-09 01:02:26.959277 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 01:02:26.959283 | orchestrator |
2026-03-09 01:02:26.959288 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:02:26.959293 | orchestrator | Monday 09 March 2026 01:00:33 +0000 (0:00:00.772) 0:01:22.357 **********
2026-03-09 01:02:26.959298 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:02:26.959304 | orchestrator |
2026-03-09 01:02:26.959309 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:02:26.959314 | orchestrator | Monday 09 March 2026 01:00:57 +0000 (0:00:24.245) 0:01:46.602 **********
2026-03-09 01:02:26.959319 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.959325 | orchestrator |
2026-03-09 01:02:26.959331 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:02:26.959336 | orchestrator | Monday 09 March 2026 01:01:09 +0000 (0:00:11.594) 0:01:58.196 **********
2026-03-09 01:02:26.959342 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:02:26.959353 | orchestrator |
2026-03-09 01:02:26.959359 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 01:02:26.959365 | orchestrator |
2026-03-09 01:02:26.959370 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:02:26.959376 | orchestrator | Monday 09 March 2026 01:01:11 +0000 (0:00:02.717) 0:02:00.914 **********
2026-03-09 01:02:26.959381 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:02:26.959386 | orchestrator |
2026-03-09 01:02:26.959392 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:02:26.959404 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:18.969) 0:02:19.884 **********
2026-03-09 01:02:26.959410 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.959417 | orchestrator |
2026-03-09 01:02:26.959424 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:02:26.959430 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:16.658) 0:02:36.543 **********
2026-03-09 01:02:26.959436 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:02:26.959443 | orchestrator |
2026-03-09 01:02:26.959450 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-09 01:02:26.959456 | orchestrator |
2026-03-09 01:02:26.959463 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 01:02:26.959469 | orchestrator | Monday 09 March 2026 01:01:50 +0000 (0:00:02.809) 0:02:39.353 **********
2026-03-09 01:02:26.959475 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:02:26.959481 | orchestrator |
2026-03-09 01:02:26.959488 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 01:02:26.959494 | orchestrator | Monday 09 March 2026 01:02:03 +0000 (0:00:13.581) 0:02:52.934 **********
2026-03-09 01:02:26.959501 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:02:26.959508 | orchestrator |
2026-03-09 01:02:26.959514 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 01:02:26.959520 | orchestrator | Monday 09 March 2026 01:02:09 +0000 (0:00:05.602) 0:02:58.537 **********
2026-03-09 01:02:26.959526 | orchestrator | ok: [testbed-node-0]
2026-03-09
01:02:26.959532 | orchestrator | 2026-03-09 01:02:26.959538 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-09 01:02:26.959545 | orchestrator | 2026-03-09 01:02:26.959551 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-09 01:02:26.959562 | orchestrator | Monday 09 March 2026 01:02:12 +0000 (0:00:02.761) 0:03:01.298 ********** 2026-03-09 01:02:26.959568 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:02:26.959575 | orchestrator | 2026-03-09 01:02:26.959581 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-09 01:02:26.959588 | orchestrator | Monday 09 March 2026 01:02:12 +0000 (0:00:00.630) 0:03:01.929 ********** 2026-03-09 01:02:26.959594 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.959600 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.959606 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.959612 | orchestrator | 2026-03-09 01:02:26.959619 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-09 01:02:26.959626 | orchestrator | Monday 09 March 2026 01:02:15 +0000 (0:00:02.577) 0:03:04.506 ********** 2026-03-09 01:02:26.959632 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.959639 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.959645 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.959652 | orchestrator | 2026-03-09 01:02:26.959658 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-09 01:02:26.959665 | orchestrator | Monday 09 March 2026 01:02:17 +0000 (0:00:02.378) 0:03:06.884 ********** 2026-03-09 01:02:26.959671 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.959678 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:02:26.959684 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.959690 | orchestrator | 2026-03-09 01:02:26.959697 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-09 01:02:26.959711 | orchestrator | Monday 09 March 2026 01:02:20 +0000 (0:00:02.373) 0:03:09.258 ********** 2026-03-09 01:02:26.959717 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.959724 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.959731 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:02:26.959737 | orchestrator | 2026-03-09 01:02:26.959744 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-09 01:02:26.959751 | orchestrator | Monday 09 March 2026 01:02:22 +0000 (0:00:02.326) 0:03:11.585 ********** 2026-03-09 01:02:26.959757 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:02:26.959764 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:02:26.959771 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:02:26.959777 | orchestrator | 2026-03-09 01:02:26.959783 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-09 01:02:26.959790 | orchestrator | Monday 09 March 2026 01:02:25 +0000 (0:00:03.401) 0:03:14.987 ********** 2026-03-09 01:02:26.959796 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:02:26.959803 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:02:26.959809 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:02:26.959816 | orchestrator | 2026-03-09 01:02:26.959823 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:02:26.959851 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-09 01:02:26.959859 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-09 01:02:26.959868 | 
orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-09 01:02:26.959874 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-09 01:02:26.959881 | orchestrator | 2026-03-09 01:02:26.959887 | orchestrator | 2026-03-09 01:02:26.959894 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:02:26.959900 | orchestrator | Monday 09 March 2026 01:02:26 +0000 (0:00:00.246) 0:03:15.233 ********** 2026-03-09 01:02:26.959907 | orchestrator | =============================================================================== 2026-03-09 01:02:26.959913 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.22s 2026-03-09 01:02:26.959920 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.25s 2026-03-09 01:02:26.959933 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.58s 2026-03-09 01:02:26.959940 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.12s 2026-03-09 01:02:26.959947 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.99s 2026-03-09 01:02:26.959953 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.65s 2026-03-09 01:02:26.959959 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.60s 2026-03-09 01:02:26.959966 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.53s 2026-03-09 01:02:26.959972 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.82s 2026-03-09 01:02:26.959979 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.32s 2026-03-09 01:02:26.959985 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS certificate --- 3.97s
2026-03-09 01:02:26.959992 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.77s
2026-03-09 01:02:26.959999 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.74s
2026-03-09 01:02:26.960005 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.68s
2026-03-09 01:02:26.960017 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.46s
2026-03-09 01:02:26.960023 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.40s
2026-03-09 01:02:26.960035 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.08s
2026-03-09 01:02:26.960041 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s
2026-03-09 01:02:26.960047 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.76s
2026-03-09 01:02:26.960054 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.72s
2026-03-09 01:02:30.009542 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:30.010795 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:30.012766 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:30.013283 | orchestrator | 2026-03-09 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:33.077520 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:33.077612 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:33.077622 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:33.077629 | orchestrator | 2026-03-09 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:36.113478 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:36.115979 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:36.118916 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:36.118982 | orchestrator | 2026-03-09 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:39.160282 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:39.163137 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:39.163843 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:39.164071 | orchestrator | 2026-03-09 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:42.203076 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:42.203429 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:42.204631 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:42.204672 | orchestrator | 2026-03-09 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:45.255147 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:45.258650 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:45.260184 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:45.260223 | orchestrator | 2026-03-09 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:48.292026 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:48.296252 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:48.298297 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:48.298346 | orchestrator | 2026-03-09 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:51.335459 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:51.337346 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:51.339514 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:51.339681 | orchestrator | 2026-03-09 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:54.386304 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:54.388665 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:54.391756 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:54.391808 | orchestrator | 2026-03-09 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:02:57.445181 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:02:57.447143 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:02:57.448349 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:02:57.448932 | orchestrator | 2026-03-09 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:00.481260 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:00.485682 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:00.486942 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:00.486989 | orchestrator | 2026-03-09 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:03.540771 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:03.546218 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:03.548586 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:03.548713 | orchestrator | 2026-03-09 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:06.600464 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:06.603756 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:06.606291 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:06.606354 | orchestrator | 2026-03-09 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:09.652608 | orchestrator | 2026-03-09 01:03:09 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:09.655458 | orchestrator | 2026-03-09 01:03:09 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:09.656536 | orchestrator | 2026-03-09 01:03:09 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:09.656662 | orchestrator | 2026-03-09 01:03:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:12.695465 | orchestrator | 2026-03-09 01:03:12 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:12.697529 | orchestrator | 2026-03-09 01:03:12 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:12.699714 | orchestrator | 2026-03-09 01:03:12 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:12.699767 | orchestrator | 2026-03-09 01:03:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:15.741863 | orchestrator | 2026-03-09 01:03:15 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:15.743518 | orchestrator | 2026-03-09 01:03:15 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:15.745797 | orchestrator | 2026-03-09 01:03:15 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:15.745843 | orchestrator | 2026-03-09 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:18.802973 | orchestrator | 2026-03-09 01:03:18 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:18.805137 | orchestrator | 2026-03-09 01:03:18 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:18.807377 | orchestrator | 2026-03-09 01:03:18 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:18.807435 | orchestrator | 2026-03-09 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:21.853526 | orchestrator | 2026-03-09 01:03:21 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:21.853647 | orchestrator | 2026-03-09 01:03:21 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:21.856061 | orchestrator | 2026-03-09 01:03:21 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:21.856722 | orchestrator | 2026-03-09 01:03:21 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:24.910636 | orchestrator | 2026-03-09 01:03:24 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state STARTED
2026-03-09 01:03:24.912736 | orchestrator | 2026-03-09 01:03:24 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED
2026-03-09 01:03:24.916378 | orchestrator | 2026-03-09 01:03:24 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED
2026-03-09 01:03:24.916449 | orchestrator | 2026-03-09 01:03:24 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:03:27.958412 | orchestrator | 2026-03-09 01:03:27 | INFO  | Task b36d5631-90fa-4028-9051-93bb262ce134 is in state SUCCESS
2026-03-09 01:03:27.959984 | orchestrator |
2026-03-09 01:03:27.960049 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 01:03:27.960065 | orchestrator | 2.16.14
2026-03-09 01:03:27.960078 | orchestrator |
2026-03-09 01:03:27.960090 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-09 01:03:27.960101 | orchestrator |
2026-03-09 01:03:27.960111 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-09 01:03:27.960122 | orchestrator | Monday 09 March 2026 01:01:13 +0000 (0:00:00.641) 0:00:00.641 **********
2026-03-09 01:03:27.960152 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3,
testbed-node-4, testbed-node-5
2026-03-09 01:03:27.960161 | orchestrator |
2026-03-09 01:03:27.960171 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-09 01:03:27.960182 | orchestrator | Monday 09 March 2026 01:01:14 +0000 (0:00:00.684) 0:00:01.326 **********
2026-03-09 01:03:27.960192 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960203 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960295 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.960435 | orchestrator |
2026-03-09 01:03:27.960442 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-09 01:03:27.960663 | orchestrator | Monday 09 March 2026 01:01:15 +0000 (0:00:00.808) 0:00:02.134 **********
2026-03-09 01:03:27.960679 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960688 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960698 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.960708 | orchestrator |
2026-03-09 01:03:27.960719 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-09 01:03:27.960729 | orchestrator | Monday 09 March 2026 01:01:15 +0000 (0:00:00.320) 0:00:02.454 **********
2026-03-09 01:03:27.960741 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960751 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960761 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.960772 | orchestrator |
2026-03-09 01:03:27.960810 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-09 01:03:27.960822 | orchestrator | Monday 09 March 2026 01:01:16 +0000 (0:00:00.852) 0:00:03.307 **********
2026-03-09 01:03:27.960833 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960844 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960855 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.960865 | orchestrator |
2026-03-09 01:03:27.960877 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-09 01:03:27.960883 | orchestrator | Monday 09 March 2026 01:01:16 +0000 (0:00:00.326) 0:00:03.633 **********
2026-03-09 01:03:27.960890 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960896 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960903 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.960909 | orchestrator |
2026-03-09 01:03:27.960915 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-09 01:03:27.960922 | orchestrator | Monday 09 March 2026 01:01:17 +0000 (0:00:00.306) 0:00:03.939 **********
2026-03-09 01:03:27.960928 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.960982 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.960996 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.961007 | orchestrator |
2026-03-09 01:03:27.961018 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-09 01:03:27.961285 | orchestrator | Monday 09 March 2026 01:01:17 +0000 (0:00:00.347) 0:00:04.287 **********
2026-03-09 01:03:27.961300 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.961307 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.961313 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.961319 | orchestrator |
2026-03-09 01:03:27.961326 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-09 01:03:27.961333 | orchestrator | Monday 09 March 2026 01:01:17 +0000 (0:00:00.533) 0:00:04.821 **********
2026-03-09 01:03:27.961339 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.961345 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.961352 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.961358 | orchestrator |
2026-03-09 01:03:27.961364 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-09 01:03:27.961370 | orchestrator | Monday 09 March 2026 01:01:18 +0000 (0:00:00.318) 0:00:05.140 **********
2026-03-09 01:03:27.961376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 01:03:27.961383 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 01:03:27.961401 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 01:03:27.961407 | orchestrator |
2026-03-09 01:03:27.961413 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-09 01:03:27.961419 | orchestrator | Monday 09 March 2026 01:01:19 +0000 (0:00:00.761) 0:00:05.901 **********
2026-03-09 01:03:27.961426 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.961444 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.961455 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.961465 | orchestrator |
2026-03-09 01:03:27.961474 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-09 01:03:27.961483 | orchestrator | Monday 09 March 2026 01:01:19 +0000 (0:00:00.491) 0:00:06.392 **********
2026-03-09 01:03:27.961493 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 01:03:27.961502 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 01:03:27.961511 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 01:03:27.961522 | orchestrator |
2026-03-09 01:03:27.961532 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-09 01:03:27.961542 | orchestrator | Monday 09 March 2026 01:01:21 +0000 (0:00:02.227) 0:00:08.620 **********
2026-03-09 01:03:27.961553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 01:03:27.961563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 01:03:27.961573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 01:03:27.961583 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.961593 | orchestrator |
2026-03-09 01:03:27.961670 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-09 01:03:27.961685 | orchestrator | Monday 09 March 2026 01:01:22 +0000 (0:00:00.667) 0:00:09.287 **********
2026-03-09 01:03:27.961695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961718 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.961724 | orchestrator |
2026-03-09 01:03:27.961730 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-09 01:03:27.961737 | orchestrator | Monday 09 March 2026 01:01:23 +0000 (0:00:00.887) 0:00:10.175 **********
2026-03-09 01:03:27.961745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961776 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.961782 | orchestrator |
2026-03-09 01:03:27.961788 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-09 01:03:27.961795 | orchestrator | Monday 09 March 2026 01:01:23 +0000 (0:00:00.408) 0:00:10.583 **********
2026-03-09 01:03:27.961808 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6c13f8ca3195', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 01:01:20.225717', 'end': '2026-03-09 01:01:20.263508', 'delta': '0:00:00.037791', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6c13f8ca3195'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961818 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '362b4b19aa5d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 01:01:21.005365', 'end': '2026-03-09 01:01:21.048198', 'delta': '0:00:00.042833', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['362b4b19aa5d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961852 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd5f2afaada34', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 01:01:21.545546', 'end': '2026-03-09 01:01:21.583639', 'delta': '0:00:00.038093', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d5f2afaada34'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 01:03:27.961864 | orchestrator |
2026-03-09 01:03:27.961874 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-09 01:03:27.961887 | orchestrator | Monday 09 March 2026 01:01:23 +0000 (0:00:00.231) 0:00:10.814 **********
2026-03-09 01:03:27.961899 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.961909 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:03:27.961920 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:03:27.961931 | orchestrator |
2026-03-09 01:03:27.961997 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-09 01:03:27.962009 | orchestrator | Monday 09 March 2026 01:01:24 +0000 (0:00:00.487) 0:00:11.302 **********
2026-03-09 01:03:27.962065 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-09 01:03:27.962078 | orchestrator |
2026-03-09 01:03:27.962109 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-09 01:03:27.962121 | orchestrator | Monday 09 March 2026 01:01:26 +0000 (0:00:01.659) 0:00:12.962 **********
2026-03-09 01:03:27.962142 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962152 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962163 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962170 | orchestrator |
2026-03-09 01:03:27.962176 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-09 01:03:27.962183 | orchestrator | Monday 09 March 2026 01:01:26 +0000 (0:00:00.325) 0:00:13.287 **********
2026-03-09 01:03:27.962189 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962195 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962201 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962207 | orchestrator |
2026-03-09 01:03:27.962213 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 01:03:27.962219 | orchestrator | Monday 09 March 2026 01:01:26 +0000 (0:00:00.454) 0:00:13.742 **********
2026-03-09 01:03:27.962225 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962232 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962238 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962244 | orchestrator |
2026-03-09 01:03:27.962250 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-09 01:03:27.962256 | orchestrator | Monday 09 March 2026 01:01:27 +0000 (0:00:00.548) 0:00:14.290 **********
2026-03-09 01:03:27.962263 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:03:27.962269 | orchestrator |
2026-03-09 01:03:27.962275 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-09 01:03:27.962281 | orchestrator | Monday 09 March 2026 01:01:27 +0000 (0:00:00.146) 0:00:14.437 **********
2026-03-09 01:03:27.962287 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962293 | orchestrator |
2026-03-09 01:03:27.962299 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 01:03:27.962306 | orchestrator | Monday 09 March 2026 01:01:27 +0000 (0:00:00.240) 0:00:14.678 **********
2026-03-09 01:03:27.962312 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962318 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962324 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962330 | orchestrator |
2026-03-09 01:03:27.962336 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-09 01:03:27.962343 | orchestrator | Monday 09 March 2026 01:01:28 +0000 (0:00:00.317) 0:00:14.995 **********
2026-03-09 01:03:27.962349 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962355 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962361 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962367 | orchestrator |
2026-03-09 01:03:27.962373 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-09 01:03:27.962379 | orchestrator | Monday 09 March 2026 01:01:28 +0000 (0:00:00.336) 0:00:15.332 **********
2026-03-09 01:03:27.962385 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962397 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962403 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962410 | orchestrator |
2026-03-09 01:03:27.962416 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-09 01:03:27.962422 | orchestrator | Monday 09 March 2026 01:01:28 +0000 (0:00:00.537) 0:00:15.869 **********
2026-03-09 01:03:27.962428 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962434 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962440 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962447 | orchestrator |
2026-03-09 01:03:27.962453 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-09 01:03:27.962459 | orchestrator | Monday 09 March 2026 01:01:29 +0000 (0:00:00.344) 0:00:16.213 **********
2026-03-09 01:03:27.962465 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962471 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962477 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:03:27.962483 | orchestrator |
2026-03-09 01:03:27.962490 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-09 01:03:27.962501 | orchestrator | Monday 09 March 2026 01:01:29 +0000 (0:00:00.339) 0:00:16.553 **********
2026-03-09 01:03:27.962567 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:03:27.962576 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:03:27.962582 | orchestrator |
skipping: [testbed-node-5] 2026-03-09 01:03:27.962610 | orchestrator | 2026-03-09 01:03:27.962617 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-09 01:03:27.962624 | orchestrator | Monday 09 March 2026 01:01:29 +0000 (0:00:00.314) 0:00:16.867 ********** 2026-03-09 01:03:27.962630 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.962636 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.962642 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.962648 | orchestrator | 2026-03-09 01:03:27.962655 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-09 01:03:27.962661 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.566) 0:00:17.434 ********** 2026-03-09 01:03:27.962669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85', 'dm-uuid-LVM-w3KmgfdCLRCz1nzP1ZpO9H9pHqJp1r7WcbHFA9REnlGsm5wfiRuHIIAZJeZFEBOr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3', 'dm-uuid-LVM-2DzBRdoHI7a6R3hiAm39d4nXHwL76disOJvxLFpTMn4O8Cnk33qSzCmskqV7mLMX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962722 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.962782 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-50MCkw-QrFW-3czy-Y4uM-IwOG-BDk8-HCbrtU', 'scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29', 'scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.962813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C1x2z5-fK0E-NcTN-wBoz-sr5t-Wo21-SbAqpG', 'scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f', 'scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.962822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112', 'scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.962829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.962835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0', 'dm-uuid-LVM-Crn65bAtcJ8NY0QAXe6hc3ClXzBKgzu5c2fiklXOx2FAFa7GdHF2ubYcMum8p8wZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2', 
'dm-uuid-LVM-fcEfuB2607j6ZYoUmX15C7Lmw7ILBQhowckmumsYlkuISJLIZtrE8JLpZYi3Ufhx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962955 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.962966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.962985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hZ2zsR-dpet-WtZx-YO63-Zyv2-SQcu-6wa4uF', 'scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d', 'scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd', 'dm-uuid-LVM-py0FfaQCrNAhEvJbHPwFiO3HcjwJiOciI5fsD9hd11KDxNNfPJkoZovcROKbAqBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2'], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WV45Xj-T3Dy-wDPY-kBFk-cqa0-nBae-ixHoA9', 'scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238', 'scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd', 'dm-uuid-LVM-H8lwa76xLUMSSuogPAeG6nzZ4hft20bqk0pAjtaLPc53vzwN0pGL74vNP6IJxLA6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16', 'scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2026-03-09 01:03:27.963137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963148 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.963155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 01:03:27.963204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7pJXzq-1pyI-wtRg-uBiv-Ufc4-mOUb-oEBe2k', 'scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396', 'scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4GU76F-SXdF-ds4a-84RK-BRIp-1hBV-STWcsg', 'scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0', 'scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc', 'scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 01:03:27.963257 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.963264 | orchestrator | 2026-03-09 01:03:27.963272 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-09 01:03:27.963280 | orchestrator | Monday 09 March 2026 01:01:31 +0000 (0:00:00.716) 0:00:18.151 ********** 2026-03-09 01:03:27.963288 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85', 'dm-uuid-LVM-w3KmgfdCLRCz1nzP1ZpO9H9pHqJp1r7WcbHFA9REnlGsm5wfiRuHIIAZJeZFEBOr'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3', 'dm-uuid-LVM-2DzBRdoHI7a6R3hiAm39d4nXHwL76disOJvxLFpTMn4O8Cnk33qSzCmskqV7mLMX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0', 'dm-uuid-LVM-Crn65bAtcJ8NY0QAXe6hc3ClXzBKgzu5c2fiklXOx2FAFa7GdHF2ubYcMum8p8wZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2', 'dm-uuid-LVM-fcEfuB2607j6ZYoUmX15C7Lmw7ILBQhowckmumsYlkuISJLIZtrE8JLpZYi3Ufhx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-09 01:03:27.963455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3868cf7-4a53-4299-a9f2-4f48ea5905a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a76ca51e--4549--54be--bcb5--a2c49bca5f85-osd--block--a76ca51e--4549--54be--bcb5--a2c49bca5f85'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-50MCkw-QrFW-3czy-Y4uM-IwOG-BDk8-HCbrtU', 'scsi-0QEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29', 'scsi-SQEMU_QEMU_HARDDISK_741bb6ef-88fa-4baa-bfac-ed82f0dadf29'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--30c2fd4e--0770--5a21--8e5f--9ea8386abee3-osd--block--30c2fd4e--0770--5a21--8e5f--9ea8386abee3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C1x2z5-fK0E-NcTN-wBoz-sr5t-Wo21-SbAqpG', 'scsi-0QEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f', 'scsi-SQEMU_QEMU_HARDDISK_320449d2-61ff-46fc-8f0d-ef8de6be542f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112', 'scsi-SQEMU_QEMU_HARDDISK_17d99fae-d184-430d-aac6-01476d40e112'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963579 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963610 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963621 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963632 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.963655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15', 
'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16', 'scsi-SQEMU_QEMU_HARDDISK_b742876e-d11b-4355-b37d-f52f169b3127-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0-osd--block--330a9702--ab5a--5bf7--9b95--ebb8b4c554e0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hZ2zsR-dpet-WtZx-YO63-Zyv2-SQcu-6wa4uF', 'scsi-0QEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d', 'scsi-SQEMU_QEMU_HARDDISK_fb37f328-fd68-494b-bcff-294494d86f6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963684 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1060daf8--ac1b--51e4--8c2b--8176ae449cc2-osd--block--1060daf8--ac1b--51e4--8c2b--8176ae449cc2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WV45Xj-T3Dy-wDPY-kBFk-cqa0-nBae-ixHoA9', 'scsi-0QEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238', 'scsi-SQEMU_QEMU_HARDDISK_771f98cb-74e3-479e-8ec9-00fdc11a8238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963694 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16', 'scsi-SQEMU_QEMU_HARDDISK_51b9e2da-28ed-40a7-8c18-598646420d16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963706 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963713 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.963719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd', 'dm-uuid-LVM-py0FfaQCrNAhEvJbHPwFiO3HcjwJiOciI5fsD9hd11KDxNNfPJkoZovcROKbAqBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd', 'dm-uuid-LVM-H8lwa76xLUMSSuogPAeG6nzZ4hft20bqk0pAjtaLPc53vzwN0pGL74vNP6IJxLA6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963737 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963766 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16', 
'scsi-SQEMU_QEMU_HARDDISK_b540138f-352a-495b-ba9e-a53eac3537c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd-osd--block--2e0d7a52--9ca0--5b92--a6d3--76d99ccb83bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7pJXzq-1pyI-wtRg-uBiv-Ufc4-mOUb-oEBe2k', 'scsi-0QEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396', 'scsi-SQEMU_QEMU_HARDDISK_bf4da7fe-59ae-42e8-92ff-fb55dbc42396'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bfced398--94c6--51d2--a38a--d9d8acf734fd-osd--block--bfced398--94c6--51d2--a38a--d9d8acf734fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4GU76F-SXdF-ds4a-84RK-BRIp-1hBV-STWcsg', 'scsi-0QEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0', 'scsi-SQEMU_QEMU_HARDDISK_d616dde6-c913-49b8-b8ef-90f7cc767ff0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963842 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc', 'scsi-SQEMU_QEMU_HARDDISK_7ad7d39e-c79f-49cf-9f83-32481f17a0bc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 01:03:27.963859 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.963865 | orchestrator | 2026-03-09 01:03:27.963872 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-09 01:03:27.963882 | orchestrator | Monday 09 March 2026 01:01:32 +0000 (0:00:00.783) 0:00:18.934 ********** 2026-03-09 01:03:27.963889 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:03:27.963895 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:03:27.963901 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:03:27.963908 | orchestrator | 2026-03-09 01:03:27.963914 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-09 01:03:27.963920 | orchestrator | Monday 09 March 2026 01:01:32 +0000 (0:00:00.779) 0:00:19.714 ********** 2026-03-09 01:03:27.963926 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:03:27.963932 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:03:27.963963 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:03:27.963970 | orchestrator | 2026-03-09 01:03:27.963976 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:03:27.963982 | orchestrator | Monday 09 March 2026 01:01:33 +0000 (0:00:00.570) 0:00:20.285 ********** 2026-03-09 01:03:27.963989 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:03:27.963995 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:03:27.964001 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:03:27.964007 | orchestrator | 2026-03-09 01:03:27.964013 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:03:27.964020 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:00.709) 0:00:20.994 
********** 2026-03-09 01:03:27.964026 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964032 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964038 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964044 | orchestrator | 2026-03-09 01:03:27.964051 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 01:03:27.964057 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:00.312) 0:00:21.307 ********** 2026-03-09 01:03:27.964063 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964069 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964076 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964082 | orchestrator | 2026-03-09 01:03:27.964088 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 01:03:27.964094 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:00.424) 0:00:21.732 ********** 2026-03-09 01:03:27.964100 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964106 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964113 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964230 | orchestrator | 2026-03-09 01:03:27.964239 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-09 01:03:27.964246 | orchestrator | Monday 09 March 2026 01:01:35 +0000 (0:00:00.519) 0:00:22.251 ********** 2026-03-09 01:03:27.964252 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-09 01:03:27.964259 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-09 01:03:27.964265 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-09 01:03:27.964271 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-09 01:03:27.964277 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-09 01:03:27.964283 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-09 01:03:27.964290 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-09 01:03:27.964296 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-09 01:03:27.964302 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-09 01:03:27.964308 | orchestrator | 2026-03-09 01:03:27.964315 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-09 01:03:27.964321 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.869) 0:00:23.121 ********** 2026-03-09 01:03:27.964327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 01:03:27.964334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 01:03:27.964340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 01:03:27.964346 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-09 01:03:27.964379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 01:03:27.964386 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 01:03:27.964392 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 01:03:27.964404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 01:03:27.964410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 01:03:27.964416 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964423 | orchestrator | 2026-03-09 01:03:27.964429 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-09 01:03:27.964435 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:00.375) 0:00:23.497 ********** 2026-03-09 
01:03:27.964442 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:03:27.964449 | orchestrator | 2026-03-09 01:03:27.964456 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 01:03:27.964463 | orchestrator | Monday 09 March 2026 01:01:37 +0000 (0:00:00.770) 0:00:24.268 ********** 2026-03-09 01:03:27.964475 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964481 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964488 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964497 | orchestrator | 2026-03-09 01:03:27.964507 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 01:03:27.964517 | orchestrator | Monday 09 March 2026 01:01:37 +0000 (0:00:00.341) 0:00:24.610 ********** 2026-03-09 01:03:27.964527 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964538 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964549 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964558 | orchestrator | 2026-03-09 01:03:27.964565 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 01:03:27.964571 | orchestrator | Monday 09 March 2026 01:01:38 +0000 (0:00:00.339) 0:00:24.950 ********** 2026-03-09 01:03:27.964577 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964585 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.964594 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:03:27.964604 | orchestrator | 2026-03-09 01:03:27.964611 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-09 01:03:27.964617 | orchestrator | Monday 09 March 2026 01:01:38 +0000 (0:00:00.319) 0:00:25.269 ********** 2026-03-09 
01:03:27.964623 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:03:27.964629 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:03:27.964636 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:03:27.964642 | orchestrator | 2026-03-09 01:03:27.964648 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-09 01:03:27.964654 | orchestrator | Monday 09 March 2026 01:01:39 +0000 (0:00:00.975) 0:00:26.245 ********** 2026-03-09 01:03:27.964660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:03:27.964666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:03:27.964673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:03:27.964679 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964685 | orchestrator | 2026-03-09 01:03:27.964691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-09 01:03:27.964697 | orchestrator | Monday 09 March 2026 01:01:39 +0000 (0:00:00.419) 0:00:26.664 ********** 2026-03-09 01:03:27.964703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:03:27.964711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:03:27.964720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:03:27.964738 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964747 | orchestrator | 2026-03-09 01:03:27.964757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-09 01:03:27.964767 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.410) 0:00:27.075 ********** 2026-03-09 01:03:27.964776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 01:03:27.964785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 01:03:27.964794 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 01:03:27.964802 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.964812 | orchestrator | 2026-03-09 01:03:27.964822 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-09 01:03:27.964831 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.432) 0:00:27.508 ********** 2026-03-09 01:03:27.964841 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:03:27.964852 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:03:27.964862 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:03:27.964873 | orchestrator | 2026-03-09 01:03:27.964883 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-09 01:03:27.964894 | orchestrator | Monday 09 March 2026 01:01:40 +0000 (0:00:00.336) 0:00:27.844 ********** 2026-03-09 01:03:27.964903 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 01:03:27.964911 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-09 01:03:27.964919 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-09 01:03:27.964926 | orchestrator | 2026-03-09 01:03:27.964958 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-09 01:03:27.964968 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.548) 0:00:28.392 ********** 2026-03-09 01:03:27.964975 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:03:27.964983 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:03:27.964991 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:03:27.964999 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:03:27.965014 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-09 01:03:27.965020 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:03:27.965027 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:03:27.965033 | orchestrator | 2026-03-09 01:03:27.965039 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-09 01:03:27.965045 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:01.054) 0:00:29.447 ********** 2026-03-09 01:03:27.965051 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 01:03:27.965058 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 01:03:27.965064 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 01:03:27.965070 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-09 01:03:27.965076 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-09 01:03:27.965082 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-09 01:03:27.965094 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-09 01:03:27.965101 | orchestrator | 2026-03-09 01:03:27.965107 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-09 01:03:27.965117 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:02.205) 0:00:31.653 ********** 2026-03-09 01:03:27.965127 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:03:27.965133 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:03:27.965140 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-09 01:03:27.965152 | orchestrator | 2026-03-09 01:03:27.965158 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-09 01:03:27.965165 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:00.414) 0:00:32.067 ********** 2026-03-09 01:03:27.965172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 01:03:27.965179 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 01:03:27.965186 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 01:03:27.965192 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 01:03:27.965199 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-09 01:03:27.965205 | orchestrator | 2026-03-09 01:03:27.965211 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-09 01:03:27.965217 | orchestrator | Monday 09 March 2026 01:02:30 +0000 (0:00:45.381) 0:01:17.448 ********** 2026-03-09 01:03:27.965223 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965230 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965242 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965248 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965255 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965261 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-09 01:03:27.965267 | orchestrator | 2026-03-09 01:03:27.965273 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-09 01:03:27.965279 | orchestrator | Monday 09 March 2026 01:02:55 +0000 (0:00:24.998) 0:01:42.447 ********** 2026-03-09 01:03:27.965285 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965291 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965297 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965307 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965313 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965319 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965325 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 01:03:27.965331 | orchestrator | 2026-03-09 01:03:27.965337 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-09 01:03:27.965349 | orchestrator | Monday 09 March 2026 01:03:08 +0000 (0:00:12.528) 0:01:54.975 ********** 2026-03-09 01:03:27.965355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965361 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 01:03:27.965368 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965374 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965380 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 01:03:27.965390 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965396 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965402 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 01:03:27.965408 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965415 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965421 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 01:03:27.965427 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965433 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965439 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-09 01:03:27.965445 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 01:03:27.965458 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 01:03:27.965464 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 01:03:27.965470 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-09 01:03:27.965476 | orchestrator | 2026-03-09 01:03:27.965483 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:03:27.965489 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-09 01:03:27.965497 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-09 01:03:27.965503 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-09 01:03:27.965510 | orchestrator | 2026-03-09 01:03:27.965516 | orchestrator | 2026-03-09 01:03:27.965522 | orchestrator | 2026-03-09 01:03:27.965528 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:03:27.965534 | orchestrator | Monday 09 March 2026 01:03:26 +0000 (0:00:18.210) 0:02:13.186 ********** 2026-03-09 01:03:27.965541 | orchestrator | =============================================================================== 2026-03-09 01:03:27.965547 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.38s 2026-03-09 01:03:27.965553 | orchestrator | generate keys ---------------------------------------------------------- 25.00s 2026-03-09 01:03:27.965559 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.21s 
2026-03-09 01:03:27.965565 | orchestrator | get keys from monitors ------------------------------------------------- 12.53s 2026-03-09 01:03:27.965571 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.23s 2026-03-09 01:03:27.965577 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.21s 2026-03-09 01:03:27.965583 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.66s 2026-03-09 01:03:27.965594 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.05s 2026-03-09 01:03:27.965600 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.98s 2026-03-09 01:03:27.965607 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s 2026-03-09 01:03:27.965613 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-09 01:03:27.965619 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.85s 2026-03-09 01:03:27.965625 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.81s 2026-03-09 01:03:27.965631 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.78s 2026-03-09 01:03:27.965637 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.78s 2026-03-09 01:03:27.965648 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s 2026-03-09 01:03:27.965654 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.76s 2026-03-09 01:03:27.965660 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.72s 2026-03-09 01:03:27.965666 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2026-03-09 
01:03:27.965672 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.69s 2026-03-09 01:03:27.965679 | orchestrator | 2026-03-09 01:03:27 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:03:27.965685 | orchestrator | 2026-03-09 01:03:27 | INFO  | Task 6b5dbac9-85db-4528-b777-8ed6625a2638 is in state STARTED 2026-03-09 01:03:27.965691 | orchestrator | 2026-03-09 01:03:27 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED 2026-03-09 01:03:27.965697 | orchestrator | 2026-03-09 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:10.710829 | orchestrator | 2026-03-09 01:04:10 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:10.711223 | orchestrator | 2026-03-09 01:04:10 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:10.712551 | orchestrator | 2026-03-09 01:04:10 | INFO  | Task 6b5dbac9-85db-4528-b777-8ed6625a2638 is in state SUCCESS 2026-03-09 01:04:10.713737 | orchestrator | 2026-03-09 01:04:10 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state STARTED 2026-03-09 01:04:10.713767 | orchestrator | 2026-03-09 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:19.873361 | orchestrator | 2026-03-09 01:04:19 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:19.875672 | orchestrator | 2026-03-09 01:04:19 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:19.878985 | orchestrator | 2026-03-09 01:04:19 | INFO  | Task 31da3325-8c89-4de7-84ee-4b84b7f78bf8 is in state SUCCESS 2026-03-09 01:04:19.879907 | orchestrator | 2026-03-09 01:04:19.879939 | orchestrator | 2026-03-09 01:04:19.879944 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-09 01:04:19.879949 | orchestrator | 2026-03-09 01:04:19.880106 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-09 01:04:19.880117 | orchestrator | Monday 09 March 2026
01:03:31 +0000 (0:00:00.177) 0:00:00.177 ********** 2026-03-09 01:04:19.880124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-09 01:04:19.880155 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880163 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880169 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:04:19.880176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880183 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-09 01:04:19.880190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-09 01:04:19.880197 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:04:19.880203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-09 01:04:19.880210 | orchestrator | 2026-03-09 01:04:19.880217 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-09 01:04:19.880223 | orchestrator | Monday 09 March 2026 01:03:36 +0000 (0:00:04.887) 0:00:05.064 ********** 2026-03-09 01:04:19.880230 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-09 01:04:19.880237 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880243 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 
01:04:19.880249 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:04:19.880256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-09 01:04:19.880270 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-09 01:04:19.880276 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:04:19.880281 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-09 01:04:19.880285 | orchestrator | 2026-03-09 01:04:19.880289 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-09 01:04:19.880293 | orchestrator | Monday 09 March 2026 01:03:41 +0000 (0:00:04.486) 0:00:09.551 ********** 2026-03-09 01:04:19.880298 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 01:04:19.880302 | orchestrator | 2026-03-09 01:04:19.880306 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-09 01:04:19.880310 | orchestrator | Monday 09 March 2026 01:03:42 +0000 (0:00:01.070) 0:00:10.621 ********** 2026-03-09 01:04:19.880314 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-09 01:04:19.880318 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880322 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880326 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:04:19.880329 | orchestrator | ok: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-09 01:04:19.880337 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-09 01:04:19.880341 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:04:19.880344 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-09 01:04:19.880352 | orchestrator | 2026-03-09 01:04:19.880356 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-09 01:04:19.880359 | orchestrator | Monday 09 March 2026 01:03:57 +0000 (0:00:15.268) 0:00:25.890 ********** 2026-03-09 01:04:19.880363 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-09 01:04:19.880376 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-09 01:04:19.880380 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-09 01:04:19.880384 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-09 01:04:19.880397 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-09 01:04:19.880401 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-09 01:04:19.880405 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-09 01:04:19.880409 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-09 01:04:19.880412 | 
orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-09 01:04:19.880416 | orchestrator | 2026-03-09 01:04:19.880420 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-09 01:04:19.880424 | orchestrator | Monday 09 March 2026 01:04:01 +0000 (0:00:03.394) 0:00:29.285 ********** 2026-03-09 01:04:19.880428 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-09 01:04:19.880432 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880436 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880440 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:04:19.880444 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-09 01:04:19.880448 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-09 01:04:19.880572 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-09 01:04:19.880578 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:04:19.880582 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-09 01:04:19.880586 | orchestrator | 2026-03-09 01:04:19.880590 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:04:19.880594 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:04:19.880599 | orchestrator | 2026-03-09 01:04:19.880603 | orchestrator | 2026-03-09 01:04:19.880607 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:04:19.880611 | orchestrator | Monday 09 March 2026 01:04:08 +0000 (0:00:07.650) 0:00:36.935 
********** 2026-03-09 01:04:19.880615 | orchestrator | =============================================================================== 2026-03-09 01:04:19.880619 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.27s 2026-03-09 01:04:19.880623 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.65s 2026-03-09 01:04:19.880626 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.89s 2026-03-09 01:04:19.880630 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.49s 2026-03-09 01:04:19.880634 | orchestrator | Check if target directories exist --------------------------------------- 3.39s 2026-03-09 01:04:19.880638 | orchestrator | Create share directory -------------------------------------------------- 1.07s 2026-03-09 01:04:19.880646 | orchestrator | 2026-03-09 01:04:19.880659 | orchestrator | 2026-03-09 01:04:19.880663 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:04:19.880667 | orchestrator | 2026-03-09 01:04:19.880671 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:04:19.880680 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-03-09 01:04:19.880687 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.880694 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.880700 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.880707 | orchestrator | 2026-03-09 01:04:19.880713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:04:19.880719 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:00.408) 0:00:00.694 ********** 2026-03-09 01:04:19.880726 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-09 01:04:19.880730 | orchestrator | 
ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-09 01:04:19.880734 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-09 01:04:19.880738 | orchestrator | 2026-03-09 01:04:19.880742 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-09 01:04:19.880745 | orchestrator | 2026-03-09 01:04:19.880749 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:04:19.880753 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.472) 0:00:01.167 ********** 2026-03-09 01:04:19.880757 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:04:19.880761 | orchestrator | 2026-03-09 01:04:19.880764 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-09 01:04:19.880768 | orchestrator | Monday 09 March 2026 01:02:33 +0000 (0:00:00.609) 0:00:01.777 ********** 2026-03-09 01:04:19.880787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.880801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.880810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.880818 | orchestrator | 2026-03-09 01:04:19.880822 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-09 01:04:19.880826 | orchestrator | Monday 09 March 2026 01:02:34 +0000 (0:00:01.281) 0:00:03.059 ********** 2026-03-09 01:04:19.880830 | 
orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.880834 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.880838 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.880842 | orchestrator | 2026-03-09 01:04:19.880846 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:04:19.880850 | orchestrator | Monday 09 March 2026 01:02:34 +0000 (0:00:00.532) 0:00:03.591 ********** 2026-03-09 01:04:19.880854 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:04:19.880857 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:04:19.880861 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:04:19.880865 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:04:19.880869 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:04:19.880873 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:04:19.880877 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:04:19.880880 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:04:19.880884 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:04:19.880888 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:04:19.880892 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:04:19.880895 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:04:19.880899 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': 
False})  2026-03-09 01:04:19.880903 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:04:19.880907 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:04:19.880910 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:04:19.880914 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:04:19.880920 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:04:19.880924 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:04:19.880928 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:04:19.880932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:04:19.880938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:04:19.880942 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:04:19.880946 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:04:19.880950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-09 01:04:19.880955 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-09 01:04:19.880962 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-09 01:04:19.880966 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-09 01:04:19.880970 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-09 01:04:19.880974 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-09 01:04:19.880978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-09 01:04:19.880981 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-09 01:04:19.880985 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-09 01:04:19.880989 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-09 01:04:19.880993 | orchestrator | 2026-03-09 01:04:19.880997 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881001 | orchestrator | Monday 09 March 2026 01:02:35 +0000 (0:00:00.756) 0:00:04.347 ********** 2026-03-09 01:04:19.881004 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881008 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881012 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881016 | orchestrator | 2026-03-09 01:04:19.881020 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881023 | 
orchestrator | Monday 09 March 2026 01:02:35 +0000 (0:00:00.324) 0:00:04.672 ********** 2026-03-09 01:04:19.881028 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881045 | orchestrator | 2026-03-09 01:04:19.881050 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881053 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:00.145) 0:00:04.818 ********** 2026-03-09 01:04:19.881057 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881061 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881065 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881068 | orchestrator | 2026-03-09 01:04:19.881072 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881076 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:00.544) 0:00:05.362 ********** 2026-03-09 01:04:19.881080 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881084 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881087 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881091 | orchestrator | 2026-03-09 01:04:19.881095 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881099 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:00.323) 0:00:05.685 ********** 2026-03-09 01:04:19.881103 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881106 | orchestrator | 2026-03-09 01:04:19.881110 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881114 | orchestrator | Monday 09 March 2026 01:02:37 +0000 (0:00:00.132) 0:00:05.817 ********** 2026-03-09 01:04:19.881118 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881121 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881125 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 01:04:19.881129 | orchestrator | 2026-03-09 01:04:19.881133 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881140 | orchestrator | Monday 09 March 2026 01:02:37 +0000 (0:00:00.362) 0:00:06.180 ********** 2026-03-09 01:04:19.881144 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881147 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881151 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881155 | orchestrator | 2026-03-09 01:04:19.881159 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881165 | orchestrator | Monday 09 March 2026 01:02:37 +0000 (0:00:00.335) 0:00:06.516 ********** 2026-03-09 01:04:19.881169 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881173 | orchestrator | 2026-03-09 01:04:19.881177 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881181 | orchestrator | Monday 09 March 2026 01:02:38 +0000 (0:00:00.398) 0:00:06.915 ********** 2026-03-09 01:04:19.881185 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881188 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881192 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881196 | orchestrator | 2026-03-09 01:04:19.881203 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881208 | orchestrator | Monday 09 March 2026 01:02:38 +0000 (0:00:00.339) 0:00:07.255 ********** 2026-03-09 01:04:19.881213 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881218 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881223 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881227 | orchestrator | 2026-03-09 01:04:19.881232 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 
01:04:19.881236 | orchestrator | Monday 09 March 2026 01:02:38 +0000 (0:00:00.351) 0:00:07.606 ********** 2026-03-09 01:04:19.881241 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881245 | orchestrator | 2026-03-09 01:04:19.881249 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881254 | orchestrator | Monday 09 March 2026 01:02:39 +0000 (0:00:00.151) 0:00:07.757 ********** 2026-03-09 01:04:19.881258 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881263 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881267 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881272 | orchestrator | 2026-03-09 01:04:19.881276 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881281 | orchestrator | Monday 09 March 2026 01:02:39 +0000 (0:00:00.323) 0:00:08.081 ********** 2026-03-09 01:04:19.881286 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881290 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881294 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881299 | orchestrator | 2026-03-09 01:04:19.881304 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881308 | orchestrator | Monday 09 March 2026 01:02:39 +0000 (0:00:00.634) 0:00:08.715 ********** 2026-03-09 01:04:19.881313 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881317 | orchestrator | 2026-03-09 01:04:19.881322 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881326 | orchestrator | Monday 09 March 2026 01:02:40 +0000 (0:00:00.162) 0:00:08.878 ********** 2026-03-09 01:04:19.881330 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881335 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881340 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 01:04:19.881344 | orchestrator | 2026-03-09 01:04:19.881348 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881353 | orchestrator | Monday 09 March 2026 01:02:40 +0000 (0:00:00.374) 0:00:09.253 ********** 2026-03-09 01:04:19.881357 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881362 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881366 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881371 | orchestrator | 2026-03-09 01:04:19.881375 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881383 | orchestrator | Monday 09 March 2026 01:02:40 +0000 (0:00:00.357) 0:00:09.611 ********** 2026-03-09 01:04:19.881388 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881392 | orchestrator | 2026-03-09 01:04:19.881397 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881401 | orchestrator | Monday 09 March 2026 01:02:41 +0000 (0:00:00.131) 0:00:09.743 ********** 2026-03-09 01:04:19.881406 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881410 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881415 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881419 | orchestrator | 2026-03-09 01:04:19.881424 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881428 | orchestrator | Monday 09 March 2026 01:02:41 +0000 (0:00:00.302) 0:00:10.045 ********** 2026-03-09 01:04:19.881433 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881437 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881442 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881446 | orchestrator | 2026-03-09 01:04:19.881450 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-03-09 01:04:19.881454 | orchestrator | Monday 09 March 2026 01:02:41 +0000 (0:00:00.622) 0:00:10.668 ********** 2026-03-09 01:04:19.881458 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881461 | orchestrator | 2026-03-09 01:04:19.881466 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881469 | orchestrator | Monday 09 March 2026 01:02:42 +0000 (0:00:00.149) 0:00:10.818 ********** 2026-03-09 01:04:19.881473 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881477 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881481 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881485 | orchestrator | 2026-03-09 01:04:19.881489 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881493 | orchestrator | Monday 09 March 2026 01:02:42 +0000 (0:00:00.316) 0:00:11.134 ********** 2026-03-09 01:04:19.881496 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881500 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881504 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881508 | orchestrator | 2026-03-09 01:04:19.881512 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881515 | orchestrator | Monday 09 March 2026 01:02:42 +0000 (0:00:00.349) 0:00:11.484 ********** 2026-03-09 01:04:19.881519 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881523 | orchestrator | 2026-03-09 01:04:19.881527 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881531 | orchestrator | Monday 09 March 2026 01:02:42 +0000 (0:00:00.155) 0:00:11.639 ********** 2026-03-09 01:04:19.881534 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881538 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
01:04:19.881542 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881546 | orchestrator | 2026-03-09 01:04:19.881552 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881556 | orchestrator | Monday 09 March 2026 01:02:43 +0000 (0:00:00.591) 0:00:12.231 ********** 2026-03-09 01:04:19.881560 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881564 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881568 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881572 | orchestrator | 2026-03-09 01:04:19.881575 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:04:19.881579 | orchestrator | Monday 09 March 2026 01:02:43 +0000 (0:00:00.462) 0:00:12.693 ********** 2026-03-09 01:04:19.881585 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881589 | orchestrator | 2026-03-09 01:04:19.881593 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881597 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:00.157) 0:00:12.851 ********** 2026-03-09 01:04:19.881601 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881614 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881618 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881622 | orchestrator | 2026-03-09 01:04:19.881625 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:04:19.881629 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:00.349) 0:00:13.200 ********** 2026-03-09 01:04:19.881633 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:04:19.881637 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:04:19.881641 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:04:19.881644 | orchestrator | 2026-03-09 01:04:19.881648 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-03-09 01:04:19.881652 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:00.350) 0:00:13.551 ********** 2026-03-09 01:04:19.881656 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881660 | orchestrator | 2026-03-09 01:04:19.881663 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:04:19.881667 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:00.139) 0:00:13.690 ********** 2026-03-09 01:04:19.881671 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881675 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881678 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881682 | orchestrator | 2026-03-09 01:04:19.881686 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-09 01:04:19.881690 | orchestrator | Monday 09 March 2026 01:02:45 +0000 (0:00:00.631) 0:00:14.322 ********** 2026-03-09 01:04:19.881694 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:19.881697 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:19.881701 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:19.881705 | orchestrator | 2026-03-09 01:04:19.881709 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-09 01:04:19.881712 | orchestrator | Monday 09 March 2026 01:02:47 +0000 (0:00:01.971) 0:00:16.293 ********** 2026-03-09 01:04:19.881716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:04:19.881720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:04:19.881723 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:04:19.881727 | orchestrator | 2026-03-09 01:04:19.881731 | orchestrator | TASK 
[horizon : Copying over kolla-settings.py] ******************************** 2026-03-09 01:04:19.881735 | orchestrator | Monday 09 March 2026 01:02:49 +0000 (0:00:01.750) 0:00:18.044 ********** 2026-03-09 01:04:19.881739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:04:19.881742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:04:19.881746 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:04:19.881750 | orchestrator | 2026-03-09 01:04:19.881754 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-09 01:04:19.881758 | orchestrator | Monday 09 March 2026 01:02:51 +0000 (0:00:02.421) 0:00:20.465 ********** 2026-03-09 01:04:19.881761 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:04:19.881765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:04:19.881769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:04:19.881773 | orchestrator | 2026-03-09 01:04:19.881776 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-09 01:04:19.881780 | orchestrator | Monday 09 March 2026 01:02:53 +0000 (0:00:02.247) 0:00:22.713 ********** 2026-03-09 01:04:19.881784 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881788 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881794 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881798 | orchestrator | 2026-03-09 01:04:19.881802 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 
2026-03-09 01:04:19.881806 | orchestrator | Monday 09 March 2026 01:02:54 +0000 (0:00:00.350) 0:00:23.063 ********** 2026-03-09 01:04:19.881809 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881813 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881817 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.881821 | orchestrator | 2026-03-09 01:04:19.881824 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:04:19.881828 | orchestrator | Monday 09 March 2026 01:02:54 +0000 (0:00:00.375) 0:00:23.438 ********** 2026-03-09 01:04:19.881832 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:04:19.881836 | orchestrator | 2026-03-09 01:04:19.881840 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-09 01:04:19.881846 | orchestrator | Monday 09 March 2026 01:02:55 +0000 (0:00:00.951) 0:00:24.389 ********** 2026-03-09 01:04:19.881854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.881859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.881895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.881903 | orchestrator | 2026-03-09 01:04:19.881909 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-09 01:04:19.881932 | orchestrator | Monday 09 March 2026 01:02:57 +0000 (0:00:01.681) 
0:00:26.071 ********** 2026-03-09 01:04:19.881946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.881959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.881971 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.881977 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.881992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.882000 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.882006 | orchestrator | 2026-03-09 01:04:19.882013 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-09 01:04:19.882081 | orchestrator | Monday 09 March 2026 01:02:58 +0000 (0:00:00.794) 0:00:26.865 ********** 2026-03-09 01:04:19.882086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.882096 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.882107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.882112 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.882117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:04:19.882124 | 
orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.882128 | orchestrator | 2026-03-09 01:04:19.882132 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-09 01:04:19.882136 | orchestrator | Monday 09 March 2026 01:02:59 +0000 (0:00:00.970) 0:00:27.836 ********** 2026-03-09 01:04:19.882148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.882156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.882169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:04:19.882179 | orchestrator | 2026-03-09 01:04:19.882183 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:04:19.882187 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:02.001) 0:00:29.838 ********** 2026-03-09 01:04:19.882191 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:04:19.882195 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:04:19.882198 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:04:19.882202 | orchestrator | 2026-03-09 01:04:19.882206 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:04:19.882210 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:00.339) 0:00:30.178 ********** 2026-03-09 01:04:19.882214 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:04:19.882218 | orchestrator | 2026-03-09 01:04:19.882221 | orchestrator | TASK 
[horizon : Creating Horizon database] ************************************* 2026-03-09 01:04:19.882225 | orchestrator | Monday 09 March 2026 01:03:02 +0000 (0:00:00.810) 0:00:30.989 ********** 2026-03-09 01:04:19.882229 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:19.882233 | orchestrator | 2026-03-09 01:04:19.882237 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-09 01:04:19.882241 | orchestrator | Monday 09 March 2026 01:03:05 +0000 (0:00:02.761) 0:00:33.750 ********** 2026-03-09 01:04:19.882245 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:19.882249 | orchestrator | 2026-03-09 01:04:19.882253 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-09 01:04:19.882256 | orchestrator | Monday 09 March 2026 01:03:07 +0000 (0:00:02.982) 0:00:36.733 ********** 2026-03-09 01:04:19.882260 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:19.882264 | orchestrator | 2026-03-09 01:04:19.882268 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:04:19.882272 | orchestrator | Monday 09 March 2026 01:03:24 +0000 (0:00:16.915) 0:00:53.649 ********** 2026-03-09 01:04:19.882276 | orchestrator | 2026-03-09 01:04:19.882280 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:04:19.882283 | orchestrator | Monday 09 March 2026 01:03:24 +0000 (0:00:00.062) 0:00:53.711 ********** 2026-03-09 01:04:19.882287 | orchestrator | 2026-03-09 01:04:19.882291 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:04:19.882295 | orchestrator | Monday 09 March 2026 01:03:25 +0000 (0:00:00.063) 0:00:53.775 ********** 2026-03-09 01:04:19.882299 | orchestrator | 2026-03-09 01:04:19.882304 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] 
************************** 2026-03-09 01:04:19.882310 | orchestrator | Monday 09 March 2026 01:03:25 +0000 (0:00:00.079) 0:00:53.854 ********** 2026-03-09 01:04:19.882316 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:04:19.882325 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:04:19.882332 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:04:19.882337 | orchestrator | 2026-03-09 01:04:19.882344 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:04:19.882351 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-09 01:04:19.882362 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-09 01:04:19.882368 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-09 01:04:19.882373 | orchestrator | 2026-03-09 01:04:19.882379 | orchestrator | 2026-03-09 01:04:19.882385 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:04:19.882391 | orchestrator | Monday 09 March 2026 01:04:17 +0000 (0:00:52.856) 0:01:46.711 ********** 2026-03-09 01:04:19.882402 | orchestrator | =============================================================================== 2026-03-09 01:04:19.882408 | orchestrator | horizon : Restart horizon container ------------------------------------ 52.86s 2026-03-09 01:04:19.882415 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.92s 2026-03-09 01:04:19.882421 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.98s 2026-03-09 01:04:19.882427 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.76s 2026-03-09 01:04:19.882433 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.42s 
2026-03-09 01:04:19.882440 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.25s 2026-03-09 01:04:19.882444 | orchestrator | horizon : Deploy horizon container -------------------------------------- 2.00s 2026-03-09 01:04:19.882448 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.97s 2026-03-09 01:04:19.882452 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.75s 2026-03-09 01:04:19.882456 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2026-03-09 01:04:19.882460 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.28s 2026-03-09 01:04:19.882463 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.97s 2026-03-09 01:04:19.882467 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2026-03-09 01:04:19.882472 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-09 01:04:19.882478 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s 2026-03-09 01:04:19.882484 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2026-03-09 01:04:19.882490 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2026-03-09 01:04:19.882500 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.63s 2026-03-09 01:04:19.882506 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2026-03-09 01:04:19.882512 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-03-09 01:04:19.882518 | orchestrator | 2026-03-09 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 
01:04:22.926835 | orchestrator | 2026-03-09 01:04:22 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:22.928375 | orchestrator | 2026-03-09 01:04:22 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:22.928428 | orchestrator | 2026-03-09 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:25.986197 | orchestrator | 2026-03-09 01:04:25 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:25.986311 | orchestrator | 2026-03-09 01:04:25 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:25.986325 | orchestrator | 2026-03-09 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:29.035718 | orchestrator | 2026-03-09 01:04:29 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:29.037548 | orchestrator | 2026-03-09 01:04:29 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:29.037590 | orchestrator | 2026-03-09 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:32.086458 | orchestrator | 2026-03-09 01:04:32 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:32.088831 | orchestrator | 2026-03-09 01:04:32 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:32.088861 | orchestrator | 2026-03-09 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:35.146246 | orchestrator | 2026-03-09 01:04:35 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:35.148041 | orchestrator | 2026-03-09 01:04:35 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:35.148128 | orchestrator | 2026-03-09 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:38.193133 | orchestrator | 2026-03-09 01:04:38 | INFO  | Task 
bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:38.194090 | orchestrator | 2026-03-09 01:04:38 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:38.194124 | orchestrator | 2026-03-09 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:41.239613 | orchestrator | 2026-03-09 01:04:41 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:41.244578 | orchestrator | 2026-03-09 01:04:41 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:41.244640 | orchestrator | 2026-03-09 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:44.291178 | orchestrator | 2026-03-09 01:04:44 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:44.293348 | orchestrator | 2026-03-09 01:04:44 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:44.293422 | orchestrator | 2026-03-09 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:47.331755 | orchestrator | 2026-03-09 01:04:47 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:47.333950 | orchestrator | 2026-03-09 01:04:47 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:47.334009 | orchestrator | 2026-03-09 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:50.380939 | orchestrator | 2026-03-09 01:04:50 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:50.385346 | orchestrator | 2026-03-09 01:04:50 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:50.385411 | orchestrator | 2026-03-09 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:53.428766 | orchestrator | 2026-03-09 01:04:53 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 
01:04:53.431517 | orchestrator | 2026-03-09 01:04:53 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:53.431576 | orchestrator | 2026-03-09 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:56.477897 | orchestrator | 2026-03-09 01:04:56 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:56.480476 | orchestrator | 2026-03-09 01:04:56 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:56.480616 | orchestrator | 2026-03-09 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:59.531699 | orchestrator | 2026-03-09 01:04:59 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:04:59.532736 | orchestrator | 2026-03-09 01:04:59 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:04:59.532781 | orchestrator | 2026-03-09 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:02.572805 | orchestrator | 2026-03-09 01:05:02 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:05:02.573457 | orchestrator | 2026-03-09 01:05:02 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:02.573541 | orchestrator | 2026-03-09 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:05.624729 | orchestrator | 2026-03-09 01:05:05 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:05:05.626406 | orchestrator | 2026-03-09 01:05:05 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:05.626450 | orchestrator | 2026-03-09 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:08.674905 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:05:08.676673 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 
7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:08.676746 | orchestrator | 2026-03-09 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:11.722884 | orchestrator | 2026-03-09 01:05:11 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state STARTED 2026-03-09 01:05:11.724211 | orchestrator | 2026-03-09 01:05:11 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:11.724418 | orchestrator | 2026-03-09 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:14.771518 | orchestrator | 2026-03-09 01:05:14 | INFO  | Task bdd56998-e8bc-48e7-b560-99d3897e52ee is in state SUCCESS 2026-03-09 01:05:14.772190 | orchestrator | 2026-03-09 01:05:14 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:14.773998 | orchestrator | 2026-03-09 01:05:14 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:14.774686 | orchestrator | 2026-03-09 01:05:14 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:14.775781 | orchestrator | 2026-03-09 01:05:14 | INFO  | Task 0a5163a5-a500-4084-9cfc-74de821212d1 is in state STARTED 2026-03-09 01:05:14.775828 | orchestrator | 2026-03-09 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:17.841682 | orchestrator | 2026-03-09 01:05:17 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:17.843742 | orchestrator | 2026-03-09 01:05:17 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:17.846592 | orchestrator | 2026-03-09 01:05:17 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:17.848415 | orchestrator | 2026-03-09 01:05:17 | INFO  | Task 0a5163a5-a500-4084-9cfc-74de821212d1 is in state STARTED 2026-03-09 01:05:17.848600 | orchestrator | 2026-03-09 01:05:17 | INFO  | Wait 1 second(s) until the next 
check 2026-03-09 01:05:20.894457 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:20.896565 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:20.897300 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:20.898092 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:20.901961 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:20.902348 | orchestrator | 2026-03-09 01:05:20 | INFO  | Task 0a5163a5-a500-4084-9cfc-74de821212d1 is in state SUCCESS 2026-03-09 01:05:20.902422 | orchestrator | 2026-03-09 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:23.958540 | orchestrator | 2026-03-09 01:05:23 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:23.958649 | orchestrator | 2026-03-09 01:05:23 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:23.958661 | orchestrator | 2026-03-09 01:05:23 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:23.958669 | orchestrator | 2026-03-09 01:05:23 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:23.958677 | orchestrator | 2026-03-09 01:05:23 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:23.958685 | orchestrator | 2026-03-09 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:26.998714 | orchestrator | 2026-03-09 01:05:26 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:27.000303 | orchestrator | 2026-03-09 01:05:26 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 
2026-03-09 01:05:27.002436 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:27.003373 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:27.004736 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:27.004809 | orchestrator | 2026-03-09 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:30.062721 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:30.063058 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:30.067402 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state STARTED 2026-03-09 01:05:30.074642 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:30.075198 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:30.075223 | orchestrator | 2026-03-09 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:33.141582 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:33.180594 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:33.182244 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:33.184650 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task 7d2e6950-a7e2-48c8-90a4-24a1d67e1c0b is in state SUCCESS 2026-03-09 01:05:33.185817 | orchestrator | 2026-03-09 01:05:33.185864 | orchestrator | 2026-03-09 01:05:33.185882 | orchestrator | PLAY [Apply 
role cephclient] *************************************************** 2026-03-09 01:05:33.186482 | orchestrator | 2026-03-09 01:05:33.186499 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-09 01:05:33.186508 | orchestrator | Monday 09 March 2026 01:04:13 +0000 (0:00:00.248) 0:00:00.248 ********** 2026-03-09 01:05:33.186517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-09 01:05:33.186527 | orchestrator | 2026-03-09 01:05:33.186535 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-09 01:05:33.186543 | orchestrator | Monday 09 March 2026 01:04:14 +0000 (0:00:00.262) 0:00:00.510 ********** 2026-03-09 01:05:33.186552 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-09 01:05:33.186583 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-09 01:05:33.186592 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-09 01:05:33.186600 | orchestrator | 2026-03-09 01:05:33.186608 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-09 01:05:33.186616 | orchestrator | Monday 09 March 2026 01:04:15 +0000 (0:00:01.489) 0:00:01.999 ********** 2026-03-09 01:05:33.186625 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-09 01:05:33.186632 | orchestrator | 2026-03-09 01:05:33.186640 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-09 01:05:33.186648 | orchestrator | Monday 09 March 2026 01:04:17 +0000 (0:00:01.583) 0:00:03.583 ********** 2026-03-09 01:05:33.186656 | orchestrator | changed: [testbed-manager] 2026-03-09 01:05:33.186665 | orchestrator | 2026-03-09 01:05:33.186673 | 
orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-09 01:05:33.186681 | orchestrator | Monday 09 March 2026 01:04:18 +0000 (0:00:01.051) 0:00:04.634 ********** 2026-03-09 01:05:33.186688 | orchestrator | changed: [testbed-manager] 2026-03-09 01:05:33.186696 | orchestrator | 2026-03-09 01:05:33.186704 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-09 01:05:33.186712 | orchestrator | Monday 09 March 2026 01:04:19 +0000 (0:00:00.971) 0:00:05.605 ********** 2026-03-09 01:05:33.186720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-09 01:05:33.186728 | orchestrator | ok: [testbed-manager] 2026-03-09 01:05:33.186736 | orchestrator | 2026-03-09 01:05:33.186744 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-09 01:05:33.186752 | orchestrator | Monday 09 March 2026 01:05:01 +0000 (0:00:41.916) 0:00:47.522 ********** 2026-03-09 01:05:33.186760 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-09 01:05:33.186768 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-09 01:05:33.186775 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-09 01:05:33.186783 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:05:33.186791 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-09 01:05:33.186799 | orchestrator | 2026-03-09 01:05:33.186807 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-09 01:05:33.186815 | orchestrator | Monday 09 March 2026 01:05:05 +0000 (0:00:04.542) 0:00:52.065 ********** 2026-03-09 01:05:33.186822 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-09 01:05:33.186830 | orchestrator | 2026-03-09 01:05:33.186839 | orchestrator | TASK [osism.services.cephclient : Include 
package tasks] *********************** 2026-03-09 01:05:33.186847 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:00.532) 0:00:52.597 ********** 2026-03-09 01:05:33.186854 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:05:33.186862 | orchestrator | 2026-03-09 01:05:33.186870 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-09 01:05:33.186878 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:00.168) 0:00:52.765 ********** 2026-03-09 01:05:33.186885 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:05:33.186893 | orchestrator | 2026-03-09 01:05:33.186906 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-09 01:05:33.186919 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:00.552) 0:00:53.318 ********** 2026-03-09 01:05:33.186932 | orchestrator | changed: [testbed-manager] 2026-03-09 01:05:33.186945 | orchestrator | 2026-03-09 01:05:33.186959 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-09 01:05:33.187156 | orchestrator | Monday 09 March 2026 01:05:08 +0000 (0:00:01.537) 0:00:54.855 ********** 2026-03-09 01:05:33.187173 | orchestrator | changed: [testbed-manager] 2026-03-09 01:05:33.187188 | orchestrator | 2026-03-09 01:05:33.187202 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-09 01:05:33.187214 | orchestrator | Monday 09 March 2026 01:05:09 +0000 (0:00:00.797) 0:00:55.653 ********** 2026-03-09 01:05:33.187244 | orchestrator | changed: [testbed-manager] 2026-03-09 01:05:33.187259 | orchestrator | 2026-03-09 01:05:33.187283 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-09 01:05:33.187291 | orchestrator | Monday 09 March 2026 01:05:09 +0000 (0:00:00.635) 0:00:56.288 ********** 2026-03-09 01:05:33.187300 | orchestrator | ok: 
[testbed-manager] => (item=ceph) 2026-03-09 01:05:33.187314 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-09 01:05:33.187327 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:05:33.187341 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-09 01:05:33.187354 | orchestrator | 2026-03-09 01:05:33.187367 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:05:33.187382 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 01:05:33.187396 | orchestrator | 2026-03-09 01:05:33.187408 | orchestrator | 2026-03-09 01:05:33.187458 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:05:33.187469 | orchestrator | Monday 09 March 2026 01:05:11 +0000 (0:00:01.653) 0:00:57.942 ********** 2026-03-09 01:05:33.187478 | orchestrator | =============================================================================== 2026-03-09 01:05:33.187486 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.92s 2026-03-09 01:05:33.187493 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.54s 2026-03-09 01:05:33.187501 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s 2026-03-09 01:05:33.187509 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.58s 2026-03-09 01:05:33.187517 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s 2026-03-09 01:05:33.187525 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.49s 2026-03-09 01:05:33.187533 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.05s 2026-03-09 01:05:33.187541 | orchestrator | osism.services.cephclient : Copy 
docker-compose.yml file ---------------- 0.97s 2026-03-09 01:05:33.187548 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-03-09 01:05:33.187556 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-03-09 01:05:33.187564 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s 2026-03-09 01:05:33.187572 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.53s 2026-03-09 01:05:33.187580 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-03-09 01:05:33.187587 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.17s 2026-03-09 01:05:33.187595 | orchestrator | 2026-03-09 01:05:33.187603 | orchestrator | 2026-03-09 01:05:33.187611 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:05:33.187619 | orchestrator | 2026-03-09 01:05:33.187627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:05:33.187635 | orchestrator | Monday 09 March 2026 01:05:17 +0000 (0:00:00.222) 0:00:00.222 ********** 2026-03-09 01:05:33.187643 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.187651 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.187658 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:33.187666 | orchestrator | 2026-03-09 01:05:33.187674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:05:33.187682 | orchestrator | Monday 09 March 2026 01:05:17 +0000 (0:00:00.327) 0:00:00.549 ********** 2026-03-09 01:05:33.187690 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-09 01:05:33.187698 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-09 01:05:33.187706 | 
orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-09 01:05:33.187721 | orchestrator | 2026-03-09 01:05:33.187730 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-09 01:05:33.187738 | orchestrator | 2026-03-09 01:05:33.187746 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-09 01:05:33.187753 | orchestrator | Monday 09 March 2026 01:05:18 +0000 (0:00:00.926) 0:00:01.476 ********** 2026-03-09 01:05:33.187761 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.187769 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.187777 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:33.187784 | orchestrator | 2026-03-09 01:05:33.187792 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:05:33.187801 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:05:33.187810 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:05:33.187818 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:05:33.187826 | orchestrator | 2026-03-09 01:05:33.187834 | orchestrator | 2026-03-09 01:05:33.187842 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:05:33.187850 | orchestrator | Monday 09 March 2026 01:05:19 +0000 (0:00:00.746) 0:00:02.222 ********** 2026-03-09 01:05:33.187858 | orchestrator | =============================================================================== 2026-03-09 01:05:33.187866 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2026-03-09 01:05:33.187874 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s 2026-03-09 
01:05:33.187881 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-09 01:05:33.187889 | orchestrator | 2026-03-09 01:05:33.187897 | orchestrator | 2026-03-09 01:05:33.187905 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:05:33.187913 | orchestrator | 2026-03-09 01:05:33.187926 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:05:33.187934 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:00.282) 0:00:00.283 ********** 2026-03-09 01:05:33.187942 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.187950 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.187958 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:33.187966 | orchestrator | 2026-03-09 01:05:33.187974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:05:33.187982 | orchestrator | Monday 09 March 2026 01:02:31 +0000 (0:00:00.328) 0:00:00.611 ********** 2026-03-09 01:05:33.187989 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-09 01:05:33.187997 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-09 01:05:33.188005 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-09 01:05:33.188013 | orchestrator | 2026-03-09 01:05:33.188021 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-09 01:05:33.188029 | orchestrator | 2026-03-09 01:05:33.188062 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:05:33.188072 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.626) 0:00:01.238 ********** 2026-03-09 01:05:33.188079 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 
01:05:33.188087 | orchestrator | 2026-03-09 01:05:33.188095 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-09 01:05:33.188103 | orchestrator | Monday 09 March 2026 01:02:32 +0000 (0:00:00.675) 0:00:01.913 ********** 2026-03-09 01:05:33.188117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188284 | orchestrator | 2026-03-09 01:05:33.188299 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-09 01:05:33.188313 | orchestrator | Monday 09 March 2026 01:02:34 +0000 (0:00:01.815) 0:00:03.728 ********** 2026-03-09 01:05:33.188326 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.188340 | orchestrator | 2026-03-09 01:05:33.188360 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-09 01:05:33.188374 | 
orchestrator | Monday 09 March 2026 01:02:34 +0000 (0:00:00.149) 0:00:03.877 ********** 2026-03-09 01:05:33.188388 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.188400 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:33.188412 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:33.188423 | orchestrator | 2026-03-09 01:05:33.188435 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-09 01:05:33.188447 | orchestrator | Monday 09 March 2026 01:02:35 +0000 (0:00:00.509) 0:00:04.387 ********** 2026-03-09 01:05:33.188460 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:05:33.188473 | orchestrator | 2026-03-09 01:05:33.188487 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:05:33.188500 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:00.902) 0:00:05.290 ********** 2026-03-09 01:05:33.188532 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:33.188547 | orchestrator | 2026-03-09 01:05:33.188561 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-09 01:05:33.188574 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:00.598) 0:00:05.888 ********** 2026-03-09 01:05:33.188590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.188639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188690 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188726 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.188738 | orchestrator | 2026-03-09 01:05:33.188750 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-09 01:05:33.188762 | orchestrator | Monday 09 March 2026 01:02:40 +0000 (0:00:03.659) 0:00:09.547 ********** 2026-03-09 01:05:33.188790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 01:05:33.188817 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:05:33.188831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:05:33.188846 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.188862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 01:05:33.188878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:05:33.188905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:05:33.188926 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:33.188950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.188964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.188977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.188991 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.189004 | orchestrator |
2026-03-09 01:05:33.189019 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-09 01:05:33.189032 | orchestrator | Monday 09 March 2026 01:02:41 +0000 (0:00:00.601) 0:00:10.149 **********
2026-03-09 01:05:33.189046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189115 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.189170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189213 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.189231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189293 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.189306 | orchestrator |
2026-03-09 01:05:33.189319 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-03-09 01:05:33.189333 | orchestrator | Monday 09 March 2026 01:02:42 +0000 (0:00:00.859) 0:00:11.008 **********
2026-03-09 01:05:33.189346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189506 | orchestrator |
2026-03-09 01:05:33.189518 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-03-09 01:05:33.189532 | orchestrator | Monday 09 March 2026 01:02:45 +0000 (0:00:03.630) 0:00:14.638 **********
2026-03-09 01:05:33.189558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.189658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.189672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.189725 | orchestrator |
2026-03-09 01:05:33.189739 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-09 01:05:33.189753 | orchestrator | Monday 09 March 2026 01:02:51 +0000 (0:00:01.524) 0:00:20.339 **********
2026-03-09 01:05:33.189767 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:33.189779 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:05:33.189793 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:05:33.189806 | orchestrator |
2026-03-09 01:05:33.189820 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-03-09 01:05:33.189834 | orchestrator | Monday 09 March 2026 01:02:52 +0000 (0:00:01.524) 0:00:21.863 **********
2026-03-09 01:05:33.189848 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.189857 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.189870 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.189883 | orchestrator |
2026-03-09 01:05:33.189897 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-03-09 01:05:33.189910 | orchestrator | Monday 09 March 2026 01:02:53 +0000 (0:00:00.315) 0:00:22.566 **********
2026-03-09 01:05:33.189921 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.189935 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.189955 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.189969 | orchestrator |
2026-03-09 01:05:33.189984 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-03-09 01:05:33.189998 | orchestrator | Monday 09 March 2026 01:02:53 +0000 (0:00:00.600) 0:00:22.881 **********
2026-03-09 01:05:33.190010 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.190071 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.190085 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.190097 | orchestrator |
2026-03-09 01:05:33.190105 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-03-09 01:05:33.190117 | orchestrator | Monday 09 March 2026 01:02:54 +0000 (0:00:00.600) 0:00:23.482 **********
2026-03-09 01:05:33.190170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.190187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.190214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.190229 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.190244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.190272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.190296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.190308 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.190322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:05:33.190344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:05:33.190358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:05:33.190372 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.190386 | orchestrator |
2026-03-09 01:05:33.190399 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-09 01:05:33.190413 | orchestrator | Monday 09 March 2026 01:02:55 +0000 (0:00:00.389) 0:00:24.086 **********
2026-03-09 01:05:33.190427 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.190440 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.190452 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.190460 | orchestrator |
2026-03-09 01:05:33.190468 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-09 01:05:33.190479 | orchestrator | Monday 09 March 2026 01:02:55 +0000 (0:00:00.389) 0:00:24.475 **********
2026-03-09 01:05:33.190494 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-09 01:05:33.190508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-09 01:05:33.190521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-09 01:05:33.190533 | orchestrator |
2026-03-09 01:05:33.190551 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-09 01:05:33.190564 | orchestrator | Monday 09 March 2026 01:02:57 +0000 (0:00:02.083) 0:00:26.558 **********
2026-03-09 01:05:33.190579 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:05:33.190593 | orchestrator |
2026-03-09 01:05:33.190606 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-09 01:05:33.190619 | orchestrator | Monday 09 March 2026 01:02:58 +0000 (0:00:01.022) 0:00:27.581 **********
2026-03-09 01:05:33.190632 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:33.190645 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:33.190659 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:33.190673 | orchestrator |
2026-03-09 01:05:33.190686 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-09 01:05:33.190699 | orchestrator | Monday 09 March 2026 01:02:59 +0000 (0:00:00.822) 0:00:28.404 **********
2026-03-09 01:05:33.190713 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:05:33.190721 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-09 01:05:33.190729 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-09 01:05:33.190744 | orchestrator |
2026-03-09 01:05:33.190753 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-09 01:05:33.190761 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:01.523) 0:00:29.928 **********
2026-03-09 01:05:33.190769 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:05:33.190778 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:05:33.190786 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:05:33.190794 | orchestrator |
2026-03-09 01:05:33.190802 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-09 01:05:33.190810 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:00.346) 0:00:30.274 **********
2026-03-09 01:05:33.190818 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-09 01:05:33.190826 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-09 01:05:33.190833 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-09 01:05:33.190847 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-09 01:05:33.190860 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-09 01:05:33.190873 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-09 01:05:33.190887 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-09 01:05:33.190902 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-09 01:05:33.190916 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-09 01:05:33.190931 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-09 01:05:33.190945 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-09 01:05:33.190959 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-09 01:05:33.190972 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-09 01:05:33.190986 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-09 01:05:33.190995 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-09 01:05:33.191004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-09 01:05:33.191012 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-09 01:05:33.191020 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-09 01:05:33.191027 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-09 01:05:33.191036 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-09 01:05:33.191043 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-09 01:05:33.191051 | orchestrator |
2026-03-09 01:05:33.191059 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-09 01:05:33.191067 | orchestrator | Monday 09 March 2026 01:03:11 +0000 (0:00:09.715) 0:00:39.990 **********
2026-03-09 01:05:33.191075 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-09 01:05:33.191086 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-09 01:05:33.191099 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-09 01:05:33.191110 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-09 01:05:33.191151 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-09 01:05:33.191168 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 
01:05:33.191177 | orchestrator | 2026-03-09 01:05:33.191191 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-09 01:05:33.191199 | orchestrator | Monday 09 March 2026 01:03:14 +0000 (0:00:03.126) 0:00:43.116 ********** 2026-03-09 01:05:33.191216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.191226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.191236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:05:33.191246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:05:33.191314 | orchestrator | 2026-03-09 01:05:33.191322 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:05:33.191330 | orchestrator | Monday 09 March 2026 01:03:16 +0000 (0:00:02.545) 0:00:45.662 ********** 2026-03-09 01:05:33.191338 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.191347 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:33.191355 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:33.191363 | orchestrator | 2026-03-09 
01:05:33.191371 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-09 01:05:33.191384 | orchestrator | Monday 09 March 2026 01:03:17 +0000 (0:00:00.356) 0:00:46.019 ********** 2026-03-09 01:05:33.191392 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191400 | orchestrator | 2026-03-09 01:05:33.191408 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-09 01:05:33.191416 | orchestrator | Monday 09 March 2026 01:03:19 +0000 (0:00:02.297) 0:00:48.317 ********** 2026-03-09 01:05:33.191424 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191432 | orchestrator | 2026-03-09 01:05:33.191440 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-09 01:05:33.191448 | orchestrator | Monday 09 March 2026 01:03:21 +0000 (0:00:02.420) 0:00:50.737 ********** 2026-03-09 01:05:33.191457 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:33.191466 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.191474 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.191482 | orchestrator | 2026-03-09 01:05:33.191491 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-09 01:05:33.191499 | orchestrator | Monday 09 March 2026 01:03:22 +0000 (0:00:01.040) 0:00:51.777 ********** 2026-03-09 01:05:33.191507 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.191515 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.191523 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:33.191531 | orchestrator | 2026-03-09 01:05:33.191542 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-09 01:05:33.191551 | orchestrator | Monday 09 March 2026 01:03:23 +0000 (0:00:00.322) 0:00:52.100 ********** 2026-03-09 01:05:33.191559 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
01:05:33.191567 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:33.191575 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:33.191583 | orchestrator | 2026-03-09 01:05:33.191591 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-09 01:05:33.191598 | orchestrator | Monday 09 March 2026 01:03:23 +0000 (0:00:00.327) 0:00:52.427 ********** 2026-03-09 01:05:33.191611 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191624 | orchestrator | 2026-03-09 01:05:33.191638 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-09 01:05:33.191651 | orchestrator | Monday 09 March 2026 01:03:39 +0000 (0:00:16.124) 0:01:08.552 ********** 2026-03-09 01:05:33.191665 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191677 | orchestrator | 2026-03-09 01:05:33.191697 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:05:33.191711 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:12.054) 0:01:20.606 ********** 2026-03-09 01:05:33.191725 | orchestrator | 2026-03-09 01:05:33.191739 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:05:33.191753 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:00.128) 0:01:20.735 ********** 2026-03-09 01:05:33.191766 | orchestrator | 2026-03-09 01:05:33.191780 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:05:33.191793 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:00.099) 0:01:20.834 ********** 2026-03-09 01:05:33.191807 | orchestrator | 2026-03-09 01:05:33.191821 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-09 01:05:33.191835 | orchestrator | Monday 09 March 2026 01:03:51 +0000 (0:00:00.086) 0:01:20.920 ********** 2026-03-09 
01:05:33.191849 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191863 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:33.191877 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:33.191888 | orchestrator | 2026-03-09 01:05:33.191896 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-09 01:05:33.191905 | orchestrator | Monday 09 March 2026 01:04:13 +0000 (0:00:22.038) 0:01:42.959 ********** 2026-03-09 01:05:33.191912 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191921 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:33.191929 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:33.191944 | orchestrator | 2026-03-09 01:05:33.191952 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-09 01:05:33.191960 | orchestrator | Monday 09 March 2026 01:04:24 +0000 (0:00:10.342) 0:01:53.301 ********** 2026-03-09 01:05:33.191968 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.191976 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:33.191984 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:33.191992 | orchestrator | 2026-03-09 01:05:33.192000 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:05:33.192008 | orchestrator | Monday 09 March 2026 01:04:31 +0000 (0:00:07.664) 0:02:00.966 ********** 2026-03-09 01:05:33.192016 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:33.192024 | orchestrator | 2026-03-09 01:05:33.192032 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-09 01:05:33.192040 | orchestrator | Monday 09 March 2026 01:04:32 +0000 (0:00:00.910) 0:02:01.876 ********** 2026-03-09 01:05:33.192048 | orchestrator | ok: [testbed-node-2] 2026-03-09 
01:05:33.192056 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.192064 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:33.192072 | orchestrator | 2026-03-09 01:05:33.192080 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-09 01:05:33.192088 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:00.957) 0:02:02.834 ********** 2026-03-09 01:05:33.192096 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:33.192105 | orchestrator | 2026-03-09 01:05:33.192113 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-09 01:05:33.192120 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:01.867) 0:02:04.702 ********** 2026-03-09 01:05:33.192185 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-09 01:05:33.192199 | orchestrator | 2026-03-09 01:05:33.192214 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-09 01:05:33.192228 | orchestrator | Monday 09 March 2026 01:04:48 +0000 (0:00:13.061) 0:02:17.764 ********** 2026-03-09 01:05:33.192243 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-09 01:05:33.192257 | orchestrator | 2026-03-09 01:05:33.192271 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-09 01:05:33.192283 | orchestrator | Monday 09 March 2026 01:05:14 +0000 (0:00:26.203) 0:02:43.969 ********** 2026-03-09 01:05:33.192297 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-09 01:05:33.192311 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-09 01:05:33.192324 | orchestrator | 2026-03-09 01:05:33.192337 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-09 01:05:33.192351 | 
orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:07.319) 0:02:51.288 ********** 2026-03-09 01:05:33.192365 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.192378 | orchestrator | 2026-03-09 01:05:33.192392 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-09 01:05:33.192405 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:00.409) 0:02:51.697 ********** 2026-03-09 01:05:33.192419 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.192432 | orchestrator | 2026-03-09 01:05:33.192445 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-09 01:05:33.192460 | orchestrator | Monday 09 March 2026 01:05:22 +0000 (0:00:00.152) 0:02:51.850 ********** 2026-03-09 01:05:33.192483 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.192498 | orchestrator | 2026-03-09 01:05:33.192511 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-09 01:05:33.192525 | orchestrator | Monday 09 March 2026 01:05:23 +0000 (0:00:00.425) 0:02:52.275 ********** 2026-03-09 01:05:33.192533 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.192549 | orchestrator | 2026-03-09 01:05:33.192557 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-09 01:05:33.192565 | orchestrator | Monday 09 March 2026 01:05:24 +0000 (0:00:01.583) 0:02:53.859 ********** 2026-03-09 01:05:33.192573 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:33.192581 | orchestrator | 2026-03-09 01:05:33.192590 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:05:33.192598 | orchestrator | Monday 09 March 2026 01:05:28 +0000 (0:00:03.732) 0:02:57.591 ********** 2026-03-09 01:05:33.192606 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:33.192623 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:05:33.192632 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:33.192640 | orchestrator | 2026-03-09 01:05:33.192648 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:05:33.192657 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-09 01:05:33.192667 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:05:33.192675 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:05:33.192682 | orchestrator | 2026-03-09 01:05:33.192691 | orchestrator | 2026-03-09 01:05:33.192699 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:05:33.192707 | orchestrator | Monday 09 March 2026 01:05:30 +0000 (0:00:01.632) 0:02:59.224 ********** 2026-03-09 01:05:33.192715 | orchestrator | =============================================================================== 2026-03-09 01:05:33.192722 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.21s 2026-03-09 01:05:33.192730 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.04s 2026-03-09 01:05:33.192738 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.12s 2026-03-09 01:05:33.192747 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.06s 2026-03-09 01:05:33.192755 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.05s 2026-03-09 01:05:33.192763 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.34s 2026-03-09 01:05:33.192771 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.72s 2026-03-09 
01:05:33.192779 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.66s 2026-03-09 01:05:33.192787 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.32s 2026-03-09 01:05:33.192794 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.70s 2026-03-09 01:05:33.192803 | orchestrator | keystone : Creating default user role ----------------------------------- 3.73s 2026-03-09 01:05:33.192811 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.66s 2026-03-09 01:05:33.192818 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.63s 2026-03-09 01:05:33.192826 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.13s 2026-03-09 01:05:33.192834 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.55s 2026-03-09 01:05:33.192842 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.42s 2026-03-09 01:05:33.192850 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.30s 2026-03-09 01:05:33.192859 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.08s 2026-03-09 01:05:33.192867 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.87s 2026-03-09 01:05:33.192875 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.82s 2026-03-09 01:05:33.192882 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:33.192896 | orchestrator | 2026-03-09 01:05:33 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:33.192905 | orchestrator | 2026-03-09 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 
01:05:36.238270 | orchestrator | 2026-03-09 01:05:36 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:36.239353 | orchestrator | 2026-03-09 01:05:36 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:36.240265 | orchestrator | 2026-03-09 01:05:36 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:36.242218 | orchestrator | 2026-03-09 01:05:36 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:36.243465 | orchestrator | 2026-03-09 01:05:36 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:36.243504 | orchestrator | 2026-03-09 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:39.323831 | orchestrator | 2026-03-09 01:05:39 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:39.323913 | orchestrator | 2026-03-09 01:05:39 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:39.323922 | orchestrator | 2026-03-09 01:05:39 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:39.323930 | orchestrator | 2026-03-09 01:05:39 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:39.323936 | orchestrator | 2026-03-09 01:05:39 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:39.323943 | orchestrator | 2026-03-09 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:42.360030 | orchestrator | 2026-03-09 01:05:42 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:42.360855 | orchestrator | 2026-03-09 01:05:42 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:42.360903 | orchestrator | 2026-03-09 01:05:42 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 
01:05:42.361894 | orchestrator | 2026-03-09 01:05:42 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:42.363407 | orchestrator | 2026-03-09 01:05:42 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:42.363527 | orchestrator | 2026-03-09 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:45.427549 | orchestrator | 2026-03-09 01:05:45 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:45.428767 | orchestrator | 2026-03-09 01:05:45 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:45.430094 | orchestrator | 2026-03-09 01:05:45 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:45.432345 | orchestrator | 2026-03-09 01:05:45 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:45.432391 | orchestrator | 2026-03-09 01:05:45 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:45.432676 | orchestrator | 2026-03-09 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:48.491468 | orchestrator | 2026-03-09 01:05:48 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:48.491561 | orchestrator | 2026-03-09 01:05:48 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:48.491855 | orchestrator | 2026-03-09 01:05:48 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:48.493025 | orchestrator | 2026-03-09 01:05:48 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:48.494394 | orchestrator | 2026-03-09 01:05:48 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:48.494433 | orchestrator | 2026-03-09 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:51.562667 | orchestrator 
| 2026-03-09 01:05:51 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:51.562756 | orchestrator | 2026-03-09 01:05:51 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:51.562768 | orchestrator | 2026-03-09 01:05:51 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:51.562777 | orchestrator | 2026-03-09 01:05:51 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:51.562785 | orchestrator | 2026-03-09 01:05:51 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:51.562793 | orchestrator | 2026-03-09 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:55.523426 | orchestrator | 2026-03-09 01:05:54 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:55.523514 | orchestrator | 2026-03-09 01:05:54 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:55.523527 | orchestrator | 2026-03-09 01:05:54 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:55.523536 | orchestrator | 2026-03-09 01:05:54 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:55.523560 | orchestrator | 2026-03-09 01:05:54 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:55.524366 | orchestrator | 2026-03-09 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:57.671276 | orchestrator | 2026-03-09 01:05:57 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:05:57.671698 | orchestrator | 2026-03-09 01:05:57 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:05:57.672709 | orchestrator | 2026-03-09 01:05:57 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:05:57.673677 | orchestrator | 
2026-03-09 01:05:57 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:05:57.674428 | orchestrator | 2026-03-09 01:05:57 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:05:57.675529 | orchestrator | 2026-03-09 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:00.786542 | orchestrator | 2026-03-09 01:06:00 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:00.787289 | orchestrator | 2026-03-09 01:06:00 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:00.788521 | orchestrator | 2026-03-09 01:06:00 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:06:00.789480 | orchestrator | 2026-03-09 01:06:00 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:00.790424 | orchestrator | 2026-03-09 01:06:00 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:00.790460 | orchestrator | 2026-03-09 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:03.845144 | orchestrator | 2026-03-09 01:06:03 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:03.845587 | orchestrator | 2026-03-09 01:06:03 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:03.846384 | orchestrator | 2026-03-09 01:06:03 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:06:03.847070 | orchestrator | 2026-03-09 01:06:03 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:03.847843 | orchestrator | 2026-03-09 01:06:03 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:03.849006 | orchestrator | 2026-03-09 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:06.892497 | orchestrator | 2026-03-09 01:06:06 | INFO  | 
Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:06.892587 | orchestrator | 2026-03-09 01:06:06 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:06.892602 | orchestrator | 2026-03-09 01:06:06 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:06:06.892613 | orchestrator | 2026-03-09 01:06:06 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:06.892625 | orchestrator | 2026-03-09 01:06:06 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:06.892637 | orchestrator | 2026-03-09 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:09.911662 | orchestrator | 2026-03-09 01:06:09 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:09.911838 | orchestrator | 2026-03-09 01:06:09 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:09.912779 | orchestrator | 2026-03-09 01:06:09 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state STARTED 2026-03-09 01:06:09.913365 | orchestrator | 2026-03-09 01:06:09 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:09.914343 | orchestrator | 2026-03-09 01:06:09 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:09.914411 | orchestrator | 2026-03-09 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:12.948194 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:12.949359 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:12.952535 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task 9062dfa7-7562-4cab-a0bd-dec83141be13 is in state SUCCESS 2026-03-09 01:06:12.954356 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task 
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:12.956512 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:12.957995 | orchestrator | 2026-03-09 01:06:12 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:12.958123 | orchestrator | 2026-03-09 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:16.002004 | orchestrator | 2026-03-09 01:06:15 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:16.003813 | orchestrator | 2026-03-09 01:06:16 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:16.005805 | orchestrator | 2026-03-09 01:06:16 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:16.009863 | orchestrator | 2026-03-09 01:06:16 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:16.012876 | orchestrator | 2026-03-09 01:06:16 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:16.013381 | orchestrator | 2026-03-09 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:19.070341 | orchestrator | 2026-03-09 01:06:19 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:19.070812 | orchestrator | 2026-03-09 01:06:19 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:19.071674 | orchestrator | 2026-03-09 01:06:19 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:19.072542 | orchestrator | 2026-03-09 01:06:19 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:19.073721 | orchestrator | 2026-03-09 01:06:19 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:19.073761 | orchestrator | 2026-03-09 01:06:19 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:06:22.109862 | orchestrator | 2026-03-09 01:06:22 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:22.109967 | orchestrator | 2026-03-09 01:06:22 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:22.111097 | orchestrator | 2026-03-09 01:06:22 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:22.112849 | orchestrator | 2026-03-09 01:06:22 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:22.114802 | orchestrator | 2026-03-09 01:06:22 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:22.115152 | orchestrator | 2026-03-09 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:25.164998 | orchestrator | 2026-03-09 01:06:25 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:25.167249 | orchestrator | 2026-03-09 01:06:25 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:25.168064 | orchestrator | 2026-03-09 01:06:25 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:25.168937 | orchestrator | 2026-03-09 01:06:25 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:25.169869 | orchestrator | 2026-03-09 01:06:25 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:25.172697 | orchestrator | 2026-03-09 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:28.219655 | orchestrator | 2026-03-09 01:06:28 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:28.220031 | orchestrator | 2026-03-09 01:06:28 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:28.220939 | orchestrator | 2026-03-09 01:06:28 | INFO  | Task 
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:28.222462 | orchestrator | 2026-03-09 01:06:28 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:28.223134 | orchestrator | 2026-03-09 01:06:28 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:28.223316 | orchestrator | 2026-03-09 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:31.260070 | orchestrator | 2026-03-09 01:06:31 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:31.260379 | orchestrator | 2026-03-09 01:06:31 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:31.261012 | orchestrator | 2026-03-09 01:06:31 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:31.261868 | orchestrator | 2026-03-09 01:06:31 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:31.262498 | orchestrator | 2026-03-09 01:06:31 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:31.262522 | orchestrator | 2026-03-09 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:34.307618 | orchestrator | 2026-03-09 01:06:34 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:34.308153 | orchestrator | 2026-03-09 01:06:34 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:34.309751 | orchestrator | 2026-03-09 01:06:34 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:34.310793 | orchestrator | 2026-03-09 01:06:34 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:34.312363 | orchestrator | 2026-03-09 01:06:34 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:34.312404 | orchestrator | 2026-03-09 01:06:34 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:06:37.351734 | orchestrator | 2026-03-09 01:06:37 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:37.351853 | orchestrator | 2026-03-09 01:06:37 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:37.352832 | orchestrator | 2026-03-09 01:06:37 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:37.353673 | orchestrator | 2026-03-09 01:06:37 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:37.354338 | orchestrator | 2026-03-09 01:06:37 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:37.354617 | orchestrator | 2026-03-09 01:06:37 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:40.385692 | orchestrator | 2026-03-09 01:06:40 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:40.387469 | orchestrator | 2026-03-09 01:06:40 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:40.388459 | orchestrator | 2026-03-09 01:06:40 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:40.389499 | orchestrator | 2026-03-09 01:06:40 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:40.390367 | orchestrator | 2026-03-09 01:06:40 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:40.390400 | orchestrator | 2026-03-09 01:06:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:43.432585 | orchestrator | 2026-03-09 01:06:43 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:43.433533 | orchestrator | 2026-03-09 01:06:43 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:43.434543 | orchestrator | 2026-03-09 01:06:43 | INFO  | Task 
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:43.435682 | orchestrator | 2026-03-09 01:06:43 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:43.436894 | orchestrator | 2026-03-09 01:06:43 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:43.436950 | orchestrator | 2026-03-09 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:46.471268 | orchestrator | 2026-03-09 01:06:46 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:46.471879 | orchestrator | 2026-03-09 01:06:46 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:46.473501 | orchestrator | 2026-03-09 01:06:46 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:46.474687 | orchestrator | 2026-03-09 01:06:46 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:46.475376 | orchestrator | 2026-03-09 01:06:46 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:46.476477 | orchestrator | 2026-03-09 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:49.521483 | orchestrator | 2026-03-09 01:06:49 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:49.521583 | orchestrator | 2026-03-09 01:06:49 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:49.522931 | orchestrator | 2026-03-09 01:06:49 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:49.524175 | orchestrator | 2026-03-09 01:06:49 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state STARTED 2026-03-09 01:06:49.525566 | orchestrator | 2026-03-09 01:06:49 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:49.525625 | orchestrator | 2026-03-09 01:06:49 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:06:52.566204 | orchestrator | 2026-03-09 01:06:52 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:52.567075 | orchestrator | 2026-03-09 01:06:52 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:52.568609 | orchestrator | 2026-03-09 01:06:52 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:52.569277 | orchestrator | 2026-03-09 01:06:52 | INFO  | Task 50ec7ab6-17a5-4646-b594-c017d2533107 is in state SUCCESS 2026-03-09 01:06:52.569573 | orchestrator | 2026-03-09 01:06:52.569596 | orchestrator | 2026-03-09 01:06:52.569603 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:06:52.569610 | orchestrator | 2026-03-09 01:06:52.569616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:06:52.569622 | orchestrator | Monday 09 March 2026 01:05:27 +0000 (0:00:00.354) 0:00:00.354 ********** 2026-03-09 01:06:52.569629 | orchestrator | ok: [testbed-manager] 2026-03-09 01:06:52.569636 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:06:52.569643 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:06:52.569649 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:06:52.569655 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:06:52.569661 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:06:52.569666 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:06:52.569673 | orchestrator | 2026-03-09 01:06:52.569679 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:06:52.569685 | orchestrator | Monday 09 March 2026 01:05:28 +0000 (0:00:01.073) 0:00:01.428 ********** 2026-03-09 01:06:52.569692 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569699 | orchestrator | ok: [testbed-node-0] => 
(item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569706 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569713 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569719 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569725 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569731 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-09 01:06:52.569762 | orchestrator | 2026-03-09 01:06:52.569769 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-09 01:06:52.569775 | orchestrator | 2026-03-09 01:06:52.569781 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-09 01:06:52.569787 | orchestrator | Monday 09 March 2026 01:05:30 +0000 (0:00:02.079) 0:00:03.508 ********** 2026-03-09 01:06:52.569794 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:06:52.569802 | orchestrator | 2026-03-09 01:06:52.569808 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-09 01:06:52.569815 | orchestrator | Monday 09 March 2026 01:05:32 +0000 (0:00:02.396) 0:00:05.905 ********** 2026-03-09 01:06:52.569821 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-09 01:06:52.569827 | orchestrator | 2026-03-09 01:06:52.569833 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-09 01:06:52.569839 | orchestrator | Monday 09 March 2026 01:05:37 +0000 (0:00:04.453) 0:00:10.358 ********** 2026-03-09 01:06:52.569846 | orchestrator | changed: [testbed-manager] => (item=swift -> 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-09 01:06:52.569855 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-09 01:06:52.569861 | orchestrator | 2026-03-09 01:06:52.569867 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-09 01:06:52.569872 | orchestrator | Monday 09 March 2026 01:05:44 +0000 (0:00:07.280) 0:00:17.638 ********** 2026-03-09 01:06:52.569878 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-09 01:06:52.569884 | orchestrator | 2026-03-09 01:06:52.569890 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-09 01:06:52.569895 | orchestrator | Monday 09 March 2026 01:05:48 +0000 (0:00:03.857) 0:00:21.496 ********** 2026-03-09 01:06:52.569902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:06:52.569908 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-09 01:06:52.569914 | orchestrator | 2026-03-09 01:06:52.569920 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-09 01:06:52.569926 | orchestrator | Monday 09 March 2026 01:05:53 +0000 (0:00:05.036) 0:00:26.533 ********** 2026-03-09 01:06:52.569933 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-09 01:06:52.569939 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-09 01:06:52.569945 | orchestrator | 2026-03-09 01:06:52.569951 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-09 01:06:52.569958 | orchestrator | Monday 09 March 2026 01:06:03 +0000 (0:00:09.966) 0:00:36.499 ********** 2026-03-09 01:06:52.569977 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-09 01:06:52.569984 | 
orchestrator | 2026-03-09 01:06:52.569990 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:06:52.569996 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570004 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570010 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570064 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570069 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570092 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570097 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570101 | orchestrator | 2026-03-09 01:06:52.570130 | orchestrator | 2026-03-09 01:06:52.570137 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:06:52.570147 | orchestrator | Monday 09 March 2026 01:06:09 +0000 (0:00:06.199) 0:00:42.699 ********** 2026-03-09 01:06:52.570155 | orchestrator | =============================================================================== 2026-03-09 01:06:52.570161 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 9.97s 2026-03-09 01:06:52.570166 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.28s 2026-03-09 01:06:52.570172 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.20s 2026-03-09 01:06:52.570178 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 
5.04s 2026-03-09 01:06:52.570184 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.45s 2026-03-09 01:06:52.570189 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.86s 2026-03-09 01:06:52.570195 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.40s 2026-03-09 01:06:52.570202 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.08s 2026-03-09 01:06:52.570208 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.07s 2026-03-09 01:06:52.570214 | orchestrator | 2026-03-09 01:06:52.570221 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 01:06:52.570267 | orchestrator | 2.16.14 2026-03-09 01:06:52.570275 | orchestrator | 2026-03-09 01:06:52.570281 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-09 01:06:52.570287 | orchestrator | 2026-03-09 01:06:52.570294 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-09 01:06:52.570300 | orchestrator | Monday 09 March 2026 01:05:17 +0000 (0:00:00.329) 0:00:00.329 ********** 2026-03-09 01:06:52.570306 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570313 | orchestrator | 2026-03-09 01:06:52.570319 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-09 01:06:52.570325 | orchestrator | Monday 09 March 2026 01:05:19 +0000 (0:00:01.958) 0:00:02.288 ********** 2026-03-09 01:06:52.570332 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570339 | orchestrator | 2026-03-09 01:06:52.570345 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-09 01:06:52.570351 | orchestrator | Monday 09 March 2026 01:05:20 +0000 (0:00:01.183) 0:00:03.471 
********** 2026-03-09 01:06:52.570357 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570363 | orchestrator | 2026-03-09 01:06:52.570369 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-09 01:06:52.570375 | orchestrator | Monday 09 March 2026 01:05:21 +0000 (0:00:01.225) 0:00:04.696 ********** 2026-03-09 01:06:52.570381 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570387 | orchestrator | 2026-03-09 01:06:52.570393 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-09 01:06:52.570399 | orchestrator | Monday 09 March 2026 01:05:23 +0000 (0:00:01.821) 0:00:06.517 ********** 2026-03-09 01:06:52.570406 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570413 | orchestrator | 2026-03-09 01:06:52.570419 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-09 01:06:52.570425 | orchestrator | Monday 09 March 2026 01:05:25 +0000 (0:00:02.025) 0:00:08.543 ********** 2026-03-09 01:06:52.570432 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570439 | orchestrator | 2026-03-09 01:06:52.570445 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-09 01:06:52.570461 | orchestrator | Monday 09 March 2026 01:05:26 +0000 (0:00:01.283) 0:00:09.827 ********** 2026-03-09 01:06:52.570466 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570471 | orchestrator | 2026-03-09 01:06:52.570476 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-09 01:06:52.570482 | orchestrator | Monday 09 March 2026 01:05:28 +0000 (0:00:02.161) 0:00:11.988 ********** 2026-03-09 01:06:52.570488 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570495 | orchestrator | 2026-03-09 01:06:52.570501 | orchestrator | TASK [Create admin user] 
******************************************************* 2026-03-09 01:06:52.570507 | orchestrator | Monday 09 March 2026 01:05:30 +0000 (0:00:01.409) 0:00:13.398 ********** 2026-03-09 01:06:52.570513 | orchestrator | changed: [testbed-manager] 2026-03-09 01:06:52.570520 | orchestrator | 2026-03-09 01:06:52.570531 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-09 01:06:52.570640 | orchestrator | Monday 09 March 2026 01:06:25 +0000 (0:00:55.644) 0:01:09.043 ********** 2026-03-09 01:06:52.570699 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:06:52.570708 | orchestrator | 2026-03-09 01:06:52.570715 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:06:52.570721 | orchestrator | 2026-03-09 01:06:52.570728 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-09 01:06:52.570734 | orchestrator | Monday 09 March 2026 01:06:26 +0000 (0:00:00.253) 0:01:09.296 ********** 2026-03-09 01:06:52.570740 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:06:52.570747 | orchestrator | 2026-03-09 01:06:52.570753 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:06:52.570759 | orchestrator | 2026-03-09 01:06:52.570766 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-09 01:06:52.570769 | orchestrator | Monday 09 March 2026 01:06:37 +0000 (0:00:11.676) 0:01:20.973 ********** 2026-03-09 01:06:52.570774 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:06:52.570778 | orchestrator | 2026-03-09 01:06:52.570781 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:06:52.570785 | orchestrator | 2026-03-09 01:06:52.570789 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2026-03-09 01:06:52.570803 | orchestrator | Monday 09 March 2026 01:06:39 +0000 (0:00:01.343) 0:01:22.316 ********** 2026-03-09 01:06:52.570810 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:06:52.570816 | orchestrator | 2026-03-09 01:06:52.570822 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:06:52.570829 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 01:06:52.570837 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570843 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570850 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:06:52.570856 | orchestrator | 2026-03-09 01:06:52.570862 | orchestrator | 2026-03-09 01:06:52.570869 | orchestrator | 2026-03-09 01:06:52.570875 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:06:52.570882 | orchestrator | Monday 09 March 2026 01:06:50 +0000 (0:00:11.236) 0:01:33.553 ********** 2026-03-09 01:06:52.570888 | orchestrator | =============================================================================== 2026-03-09 01:06:52.570894 | orchestrator | Create admin user ------------------------------------------------------ 55.64s 2026-03-09 01:06:52.570899 | orchestrator | Restart ceph manager service ------------------------------------------- 24.26s 2026-03-09 01:06:52.570905 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.16s 2026-03-09 01:06:52.570919 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 2.03s 2026-03-09 01:06:52.570927 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.96s 
2026-03-09 01:06:52.570933 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.82s 2026-03-09 01:06:52.570940 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.41s 2026-03-09 01:06:52.570946 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.28s 2026-03-09 01:06:52.570952 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.23s 2026-03-09 01:06:52.570958 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.18s 2026-03-09 01:06:52.570965 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.25s 2026-03-09 01:06:52.570971 | orchestrator | 2026-03-09 01:06:52 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:52.570978 | orchestrator | 2026-03-09 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:55.609129 | orchestrator | 2026-03-09 01:06:55 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:55.610087 | orchestrator | 2026-03-09 01:06:55 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:55.611880 | orchestrator | 2026-03-09 01:06:55 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:55.612648 | orchestrator | 2026-03-09 01:06:55 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:55.612715 | orchestrator | 2026-03-09 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:06:58.650577 | orchestrator | 2026-03-09 01:06:58 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state STARTED 2026-03-09 01:06:58.651108 | orchestrator | 2026-03-09 01:06:58 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:06:58.651974 | orchestrator | 2026-03-09 01:06:58 | INFO  | Task 
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:06:58.653017 | orchestrator | 2026-03-09 01:06:58 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED 2026-03-09 01:06:58.653038 | orchestrator | 2026-03-09 01:06:58 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output elided: tasks c35f006a-24da-46ed-b61e-c47fe4130771, af40edbd-d1bb-493b-acd7-e0d086dec7bf, 60f2188d-b5cb-42ed-b53f-f170e3d6524a, and 2470aeb4-5cf8-458e-981d-111e83a01269 remained in state STARTED and were re-checked roughly every 3 seconds from 01:07:01 through 01:09:04 ...]
2026-03-09 01:09:07.159560 | orchestrator | 2026-03-09 01:09:07 | INFO  | Task c35f006a-24da-46ed-b61e-c47fe4130771 is in state SUCCESS 2026-03-09 01:09:07.160336 | orchestrator | 2026-03-09 01:09:07.160356 | orchestrator | 2026-03-09 01:09:07.160362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:09:07.160377 | 
orchestrator | 2026-03-09 01:09:07.160382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:09:07.160387 | orchestrator | Monday 09 March 2026 01:05:37 +0000 (0:00:00.276) 0:00:00.276 ********** 2026-03-09 01:09:07.160392 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:09:07.160414 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:09:07.160419 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:09:07.160424 | orchestrator | 2026-03-09 01:09:07.160428 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:09:07.160433 | orchestrator | Monday 09 March 2026 01:05:38 +0000 (0:00:00.480) 0:00:00.757 ********** 2026-03-09 01:09:07.160438 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-09 01:09:07.160443 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-09 01:09:07.160467 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-09 01:09:07.160472 | orchestrator | 2026-03-09 01:09:07.160483 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-09 01:09:07.160487 | orchestrator | 2026-03-09 01:09:07.160493 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:09:07.160500 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.890) 0:00:01.648 ********** 2026-03-09 01:09:07.160505 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:07.160511 | orchestrator | 2026-03-09 01:09:07.160516 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-09 01:09:07.160521 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:00.911) 0:00:02.559 ********** 2026-03-09 01:09:07.160541 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 
(volumev3)) 2026-03-09 01:09:07.160546 | orchestrator | 2026-03-09 01:09:07.160550 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-09 01:09:07.160553 | orchestrator | Monday 09 March 2026 01:05:43 +0000 (0:00:04.058) 0:00:06.618 ********** 2026-03-09 01:09:07.160557 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-09 01:09:07.160561 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-09 01:09:07.160566 | orchestrator | 2026-03-09 01:09:07.160570 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-09 01:09:07.160574 | orchestrator | Monday 09 March 2026 01:05:51 +0000 (0:00:07.685) 0:00:14.303 ********** 2026-03-09 01:09:07.160577 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:09:07.160582 | orchestrator | 2026-03-09 01:09:07.160585 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-09 01:09:07.160589 | orchestrator | Monday 09 March 2026 01:05:55 +0000 (0:00:04.294) 0:00:18.598 ********** 2026-03-09 01:09:07.160593 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:09:07.160597 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-09 01:09:07.160601 | orchestrator | 2026-03-09 01:09:07.160604 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-09 01:09:07.160608 | orchestrator | Monday 09 March 2026 01:06:00 +0000 (0:00:04.775) 0:00:23.373 ********** 2026-03-09 01:09:07.160612 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:09:07.160616 | orchestrator | 2026-03-09 01:09:07.160620 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] 
********************** 2026-03-09 01:09:07.160623 | orchestrator | Monday 09 March 2026 01:06:04 +0000 (0:00:03.860) 0:00:27.234 ********** 2026-03-09 01:09:07.160627 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-09 01:09:07.160631 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-09 01:09:07.160635 | orchestrator | 2026-03-09 01:09:07.160638 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-09 01:09:07.160642 | orchestrator | Monday 09 March 2026 01:06:13 +0000 (0:00:08.715) 0:00:35.949 ********** 2026-03-09 01:09:07.160647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.160665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.160672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.160677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.160737 | orchestrator | 2026-03-09 01:09:07.160743 | 
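The items echoed by the loops above are entries of the kolla-ansible cinder service map (service key mapped to its container definition). A minimal sketch of that shape, with values copied from the log output above and non-essential fields trimmed; this is an illustration for reading the log, not kolla-ansible source:

```python
# Illustrative only: one service-map entry reconstructed from the loop
# output above (image tag, healthcheck command, and flags copied verbatim).
cinder_services = {
    "cinder-volume": {
        "container_name": "cinder_volume",
        "group": "cinder-volume",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130",
        "privileged": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
            "timeout": "30",
        },
    },
}

# Ansible's dict2items filter turns such a map into the {'key': ..., 'value': ...}
# items printed for each loop iteration in the tasks above.
items = [{"key": k, "value": v} for k, v in cinder_services.items()]
print(items[0]["key"])                       # cinder-volume
print(items[0]["value"]["container_name"])   # cinder_volume
```

Each `changed: [testbed-node-N] => (item=...)` line in the log is one such item applied on one node.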
orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:09:07.160749 | orchestrator | Monday 09 March 2026 01:06:16 +0000 (0:00:02.906) 0:00:38.856 ********** 2026-03-09 01:09:07.160755 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.160761 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:07.160767 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:07.160774 | orchestrator | 2026-03-09 01:09:07.160780 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:09:07.160787 | orchestrator | Monday 09 March 2026 01:06:17 +0000 (0:00:00.823) 0:00:39.679 ********** 2026-03-09 01:09:07.160793 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:07.160799 | orchestrator | 2026-03-09 01:09:07.160810 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-09 01:09:07.160816 | orchestrator | Monday 09 March 2026 01:06:18 +0000 (0:00:01.070) 0:00:40.750 ********** 2026-03-09 01:09:07.160823 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-09 01:09:07.160828 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-09 01:09:07.160832 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-09 01:09:07.160836 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-09 01:09:07.160839 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-09 01:09:07.160843 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-09 01:09:07.160847 | orchestrator | 2026-03-09 01:09:07.160851 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-09 01:09:07.160855 | orchestrator | Monday 09 March 2026 01:06:20 +0000 (0:00:02.576) 0:00:43.326 ********** 
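The "Copying over multiple ceph.conf" task below iterates the product of the service map and the configured ceph backends, and skips services that do not talk to Ceph directly. A hedged sketch of that selection; the real condition lives in kolla-ansible's `external_ceph.yml`, and the set of ceph-consuming services here is inferred from the skip/changed pattern in this log, not quoted from the role:

```python
# Hedged sketch: why cinder-api/cinder-scheduler are "skipping" while
# cinder-volume/cinder-backup are "changed" in the ceph.conf copy loop.
# The membership set below is inferred from this log's output.
from itertools import product

services = ["cinder-api", "cinder-scheduler", "cinder-volume", "cinder-backup"]
ceph_backends = [{"name": "rbd-1", "cluster": "ceph", "enabled": True}]
needs_ceph_conf = {"cinder-volume", "cinder-backup"}  # inferred, not authoritative

results = {
    (svc, backend["name"]): ("changed" if svc in needs_ceph_conf else "skipping")
    for svc, backend in product(services, ceph_backends)
}
print(results[("cinder-volume", "rbd-1")])  # changed
print(results[("cinder-api", "rbd-1")])     # skipping
```

This matches the loop items below, where each skipped or changed item is a `[service_entry, ceph_backend]` pair.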
2026-03-09 01:09:07.160862 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160867 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160875 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160879 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160888 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160893 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:09:07.160897 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:09:07.160904 | orchestrator | 
changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:09:07.160908 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:09:07.160948 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:09:07.160955 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:09:07.160959 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2026-03-09 01:09:07.160966 | orchestrator | 2026-03-09 01:09:07.160970 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-09 01:09:07.160974 | orchestrator | Monday 09 March 2026 01:06:25 +0000 (0:00:04.816) 0:00:48.142 ********** 2026-03-09 01:09:07.160978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:07.160982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:07.160986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:07.160990 | orchestrator | 2026-03-09 01:09:07.160993 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-09 01:09:07.160998 | orchestrator | Monday 09 March 2026 01:06:28 +0000 (0:00:03.433) 0:00:51.575 ********** 2026-03-09 01:09:07.161005 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-09 01:09:07.161010 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-09 01:09:07.161014 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-09 01:09:07.161018 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:09:07.161022 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:09:07.161025 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:09:07.161029 | orchestrator | 2026-03-09 01:09:07.161033 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-09 01:09:07.161052 | orchestrator | Monday 09 March 2026 01:06:33 +0000 (0:00:04.817) 0:00:56.392 ********** 2026-03-09 01:09:07.161057 | orchestrator | ok: 
[testbed-node-0] => (item=cinder-volume) 2026-03-09 01:09:07.161061 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-09 01:09:07.161065 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-09 01:09:07.161069 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-09 01:09:07.161073 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-09 01:09:07.161076 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-09 01:09:07.161080 | orchestrator | 2026-03-09 01:09:07.161084 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-09 01:09:07.161088 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:01.558) 0:00:57.951 ********** 2026-03-09 01:09:07.161091 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.161095 | orchestrator | 2026-03-09 01:09:07.161099 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-09 01:09:07.161103 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:00.401) 0:00:58.352 ********** 2026-03-09 01:09:07.161107 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.161111 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:07.161117 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:07.161121 | orchestrator | 2026-03-09 01:09:07.161125 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:09:07.161129 | orchestrator | Monday 09 March 2026 01:06:36 +0000 (0:00:00.428) 0:00:58.781 ********** 2026-03-09 01:09:07.161132 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:07.161136 | orchestrator | 2026-03-09 01:09:07.161140 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-09 01:09:07.161144 | orchestrator | Monday 09 
March 2026 01:06:37 +0000 (0:00:01.298) 0:01:00.079 ********** 2026-03-09 01:09:07.161150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.161157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.161161 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.161165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.161480 | orchestrator | 2026-03-09 01:09:07.161484 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-09 01:09:07.161488 | orchestrator | Monday 09 March 2026 01:06:42 +0000 (0:00:05.261) 0:01:05.341 ********** 2026-03-09 01:09:07.161492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161529 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:07.161534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161538 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:07.161542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161565 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:07.161568 | orchestrator |
2026-03-09 01:09:07.161572 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-09 01:09:07.161576 | orchestrator | Monday 09 March 2026 01:06:44 +0000 (0:00:01.399) 0:01:06.741 **********
2026-03-09 01:09:07.161598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161623 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:07.161627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161647 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:07.161654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161670 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:07.161674 | orchestrator |
2026-03-09 01:09:07.161678 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-09 01:09:07.161681 | orchestrator | Monday 09 March 2026 01:06:46 +0000 (0:00:02.051) 0:01:08.793 **********
2026-03-09 01:09:07.161688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161753 | orchestrator |
2026-03-09 01:09:07.161757 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-09 01:09:07.161760 | orchestrator | Monday 09 March 2026 01:06:51 +0000 (0:00:05.320) 0:01:14.114 **********
2026-03-09 01:09:07.161764 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-09 01:09:07.161770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-09 01:09:07.161786 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-09 01:09:07.161790 | orchestrator |
2026-03-09 01:09:07.161794 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-09 01:09:07.161798 | orchestrator | Monday 09 March 2026 01:06:53 +0000 (0:00:02.068) 0:01:16.182 **********
2026-03-09 01:09:07.161804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161875 | orchestrator |
2026-03-09 01:09:07.161879 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-09 01:09:07.161883 | orchestrator | Monday 09 March 2026 01:07:13 +0000 (0:00:19.960) 0:01:36.143 **********
2026-03-09 01:09:07.161886 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:07.161890 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:07.161894 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:07.161898 | orchestrator |
2026-03-09 01:09:07.161902 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-09 01:09:07.161906 | orchestrator | Monday 09 March 2026 01:07:17 +0000 (0:00:03.896) 0:01:40.039 **********
2026-03-09 01:09:07.161910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:09:07.161916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:09:07.161920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161930 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:07.161936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 01:09:07.161940 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161956 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.161962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 01:09:07.161969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:09:07.161983 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:07.161987 | orchestrator | 2026-03-09 01:09:07.161991 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-09 01:09:07.161995 | orchestrator | Monday 09 March 2026 01:07:19 +0000 (0:00:01.776) 0:01:41.816 ********** 2026-03-09 01:09:07.161999 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.162003 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:07.162006 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:07.162010 | orchestrator | 2026-03-09 01:09:07.162089 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-09 
01:09:07.162097 | orchestrator | Monday 09 March 2026 01:07:19 +0000 (0:00:00.632) 0:01:42.448 ********** 2026-03-09 01:09:07.162101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.162109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 
01:09:07.162117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:09:07.162124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162170 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:09:07.162178 | orchestrator | 2026-03-09 01:09:07.162182 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:09:07.162186 | orchestrator | Monday 09 March 2026 01:07:24 +0000 (0:00:04.256) 0:01:46.704 ********** 2026-03-09 01:09:07.162190 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.162194 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:07.162198 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:07.162201 | orchestrator | 2026-03-09 01:09:07.162205 | orchestrator 
| TASK [cinder : Creating Cinder database] *************************************** 2026-03-09 01:09:07.162209 | orchestrator | Monday 09 March 2026 01:07:24 +0000 (0:00:00.636) 0:01:47.341 ********** 2026-03-09 01:09:07.162213 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162217 | orchestrator | 2026-03-09 01:09:07.162221 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-09 01:09:07.162224 | orchestrator | Monday 09 March 2026 01:07:27 +0000 (0:00:02.375) 0:01:49.716 ********** 2026-03-09 01:09:07.162228 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162232 | orchestrator | 2026-03-09 01:09:07.162236 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-09 01:09:07.162242 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:02.754) 0:01:52.470 ********** 2026-03-09 01:09:07.162246 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162250 | orchestrator | 2026-03-09 01:09:07.162254 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:09:07.162258 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:21.797) 0:02:14.268 ********** 2026-03-09 01:09:07.162262 | orchestrator | 2026-03-09 01:09:07.162268 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:09:07.162272 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:00.070) 0:02:14.338 ********** 2026-03-09 01:09:07.162276 | orchestrator | 2026-03-09 01:09:07.162280 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:09:07.162284 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:00.071) 0:02:14.410 ********** 2026-03-09 01:09:07.162287 | orchestrator | 2026-03-09 01:09:07.162291 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] 
************************ 2026-03-09 01:09:07.162297 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:00.069) 0:02:14.480 ********** 2026-03-09 01:09:07.162301 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162305 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:07.162309 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:07.162313 | orchestrator | 2026-03-09 01:09:07.162316 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-09 01:09:07.162320 | orchestrator | Monday 09 March 2026 01:08:13 +0000 (0:00:21.509) 0:02:35.989 ********** 2026-03-09 01:09:07.162324 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:07.162328 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162332 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:07.162336 | orchestrator | 2026-03-09 01:09:07.162340 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-09 01:09:07.162343 | orchestrator | Monday 09 March 2026 01:08:24 +0000 (0:00:11.503) 0:02:47.493 ********** 2026-03-09 01:09:07.162347 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162351 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:07.162355 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:07.162359 | orchestrator | 2026-03-09 01:09:07.162363 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-09 01:09:07.162406 | orchestrator | Monday 09 March 2026 01:08:50 +0000 (0:00:25.995) 0:03:13.488 ********** 2026-03-09 01:09:07.162411 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:07.162414 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:07.162418 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:07.162422 | orchestrator | 2026-03-09 01:09:07.162426 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service 
versions] *** 2026-03-09 01:09:07.162430 | orchestrator | Monday 09 March 2026 01:09:03 +0000 (0:00:12.883) 0:03:26.371 ********** 2026-03-09 01:09:07.162433 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:07.162437 | orchestrator | 2026-03-09 01:09:07.162441 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:09:07.162445 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 01:09:07.162450 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:09:07.162454 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:09:07.162458 | orchestrator | 2026-03-09 01:09:07.162462 | orchestrator | 2026-03-09 01:09:07.162465 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:09:07.162469 | orchestrator | Monday 09 March 2026 01:09:04 +0000 (0:00:00.832) 0:03:27.204 ********** 2026-03-09 01:09:07.162473 | orchestrator | =============================================================================== 2026-03-09 01:09:07.162477 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.00s 2026-03-09 01:09:07.162481 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.80s 2026-03-09 01:09:07.162484 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.51s 2026-03-09 01:09:07.162488 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 19.96s 2026-03-09 01:09:07.162498 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.88s 2026-03-09 01:09:07.162503 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.50s 2026-03-09 01:09:07.162507 | 
orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.71s 2026-03-09 01:09:07.162511 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.69s 2026-03-09 01:09:07.162515 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.32s 2026-03-09 01:09:07.162519 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.26s 2026-03-09 01:09:07.162523 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.82s 2026-03-09 01:09:07.162527 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.82s 2026-03-09 01:09:07.162530 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.78s 2026-03-09 01:09:07.162534 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.29s 2026-03-09 01:09:07.162538 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.26s 2026-03-09 01:09:07.162542 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.06s 2026-03-09 01:09:07.162546 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.90s 2026-03-09 01:09:07.162552 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.86s 2026-03-09 01:09:07.162556 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.43s 2026-03-09 01:09:07.162560 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.91s 2026-03-09 01:09:07.162564 | orchestrator | 2026-03-09 01:09:07 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED 2026-03-09 01:09:07.162568 | orchestrator | 2026-03-09 01:09:07 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED 2026-03-09 
01:09:07.163185 | orchestrator | 2026-03-09 01:09:07 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:07.163796 | orchestrator | 2026-03-09 01:09:07 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED
2026-03-09 01:09:07.164317 | orchestrator | 2026-03-09 01:09:07 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:10.216819 | orchestrator | 2026-03-09 01:09:10 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED
2026-03-09 01:09:10.219699 | orchestrator | 2026-03-09 01:09:10 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:09:10.222707 | orchestrator | 2026-03-09 01:09:10 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:10.225798 | orchestrator | 2026-03-09 01:09:10 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED
2026-03-09 01:09:10.225854 | orchestrator | 2026-03-09 01:09:10 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:13.268675 | orchestrator | 2026-03-09 01:09:13 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED
2026-03-09 01:09:13.271728 | orchestrator | 2026-03-09 01:09:13 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:09:13.273118 | orchestrator | 2026-03-09 01:09:13 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:13.274711 | orchestrator | 2026-03-09 01:09:13 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state STARTED
2026-03-09 01:09:13.274780 | orchestrator | 2026-03-09 01:09:13 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:16.324342 | orchestrator | 2026-03-09 01:09:16 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state STARTED
2026-03-09 01:09:16.324950 | orchestrator | 2026-03-09 01:09:16 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:09:16.325596 | orchestrator | 2026-03-09 01:09:16 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:16.328029 | orchestrator | 2026-03-09 01:09:16 | INFO  | Task 2470aeb4-5cf8-458e-981d-111e83a01269 is in state SUCCESS
2026-03-09 01:09:16.329272 | orchestrator |
2026-03-09 01:09:16.329320 | orchestrator |
2026-03-09 01:09:16.329327 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:09:16.329333 | orchestrator |
2026-03-09 01:09:16.329346 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:09:16.329351 | orchestrator | Monday 09 March 2026 01:05:17 +0000 (0:00:00.348) 0:00:00.348 **********
2026-03-09 01:09:16.329357 | orchestrator | ok: [testbed-manager]
2026-03-09 01:09:16.329369 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:09:16.329421 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:09:16.329426 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:09:16.329432 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:09:16.329437 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:09:16.329442 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:09:16.329448 | orchestrator |
2026-03-09 01:09:16.329453 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:09:16.329464 | orchestrator | Monday 09 March 2026 01:05:18 +0000 (0:00:01.108) 0:00:01.456 **********
2026-03-09 01:09:16.329470 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329476 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329523 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329538 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329548 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329556 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329564 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-09 01:09:16.329650 | orchestrator |
2026-03-09 01:09:16.329658 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-09 01:09:16.329664 | orchestrator |
2026-03-09 01:09:16.329669 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-09 01:09:16.329674 | orchestrator | Monday 09 March 2026 01:05:19 +0000 (0:00:00.824) 0:00:02.281 **********
2026-03-09 01:09:16.329680 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:09:16.329686 | orchestrator |
2026-03-09 01:09:16.329692 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-09 01:09:16.329697 | orchestrator | Monday 09 March 2026 01:05:21 +0000 (0:00:02.073) 0:00:04.355 **********
2026-03-09 01:09:16.329704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.329721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-09 01:09:16.329739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.329847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.329885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.329907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.329918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.329952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.329968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.329986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.329993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-09 01:09:16.330126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330220 | orchestrator |
2026-03-09 01:09:16.330230 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-09 01:09:16.330238 | orchestrator | Monday 09 March 2026 01:05:26 +0000 (0:00:04.927) 0:00:09.283 **********
2026-03-09 01:09:16.330247 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:09:16.330262 | orchestrator |
2026-03-09 01:09:16.330270 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-09 01:09:16.330278 | orchestrator | Monday 09 March 2026 01:05:27 +0000 (0:00:01.756) 0:00:11.040 **********
2026-03-09 01:09:16.330290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330332 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-09 01:09:16.330342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330400 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:16.330409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330528 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:16.330564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-09 01:09:16.330573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:16.330600 | orchestrator |
2026-03-09 01:09:16.330606 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-09 01:09:16.330612 | orchestrator | Monday 09 March 2026 01:05:35 +0000 (0:00:07.711) 0:00:18.751 **********
2026-03-09 01:09:16.330619 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-09 01:09:16.330625 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330640 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 01:09:16.330650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330656 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.330662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-09 01:09:16.330682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330805 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.330840 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.330846 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.330851 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.330856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330875 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.330880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330905 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.330911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330927 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.330933 | orchestrator | 2026-03-09 01:09:16.330938 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-09 01:09:16.330944 | orchestrator | Monday 09 March 2026 01:05:37 +0000 (0:00:02.146) 0:00:20.897 ********** 2026-03-09 01:09:16.330952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 01:09:16.330959 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.330969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.330994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 01:09:16.331005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331076 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.331082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331144 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.331150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331156 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331181 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.331186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:16.331222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331228 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.331236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331251 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.331256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331276 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.331282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:16.331289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 
01:09:16.331298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:16.331312 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.331337 | orchestrator | 2026-03-09 01:09:16.331345 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-09 01:09:16.331354 | orchestrator | Monday 09 March 2026 01:05:40 +0000 (0:00:02.952) 0:00:23.850 ********** 2026-03-09 01:09:16.331366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 01:09:16.331406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-09 01:09:16.331449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.331473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331505 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331606 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331613 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 01:09:16.331618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-03-09 01:09:16.331630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.331651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.331677 | orchestrator | 2026-03-09 01:09:16.331683 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-09 01:09:16.331688 | orchestrator | Monday 09 March 2026 01:05:47 +0000 (0:00:06.894) 0:00:30.745 ********** 2026-03-09 01:09:16.331694 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:16.331699 | orchestrator | 2026-03-09 01:09:16.331707 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-09 01:09:16.331712 | orchestrator | Monday 09 March 2026 01:05:49 +0000 (0:00:01.644) 0:00:32.389 ********** 2026-03-09 01:09:16.331718 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331724 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-09 01:09:16.331734 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331740 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331746 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.331751 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331765 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331776 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094330, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331804 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094323, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773015581.2162862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331824 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331832 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094323, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2162862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.331838 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2221172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-09 01:09:16.331843 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331853 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2026-03-09 01:09:16.331859 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-03-09 01:09:16.331865 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2026-03-09 01:09:16.331874 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331882 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.331887 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331893 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331902 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2026-03-09 01:09:16.331908 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331914 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.331923 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.331931 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.331936 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.331942 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.331951 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-03-09 01:09:16.331957 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.331965 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.331971 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.331977 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.331985 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.331990 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.331996 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.332005 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332015 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.332020 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-09 01:09:16.332026 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.332033 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.332039 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.332044 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-09 01:09:16.332059 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-03-09 01:09:16.332067 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.332073 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-09 01:09:16.332079 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.332086 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.332092 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.332097 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-09 01:09:16.332106 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-03-09 01:09:16.332115 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332121 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-03-09 01:09:16.332126 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.332136 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.332141 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.332147 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-09 01:09:16.332159 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.332165 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-09 01:09:16.332170 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-09 01:09:16.332175 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-09 01:09:16.332183 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332189 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-03-09 01:09:16.332194 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332206 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.332212 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-09 01:09:16.332217 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-09 01:09:16.332223 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-03-09 01:09:16.332230 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-09 01:09:16.332236 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-03-09 01:09:16.332241 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-03-09 01:09:16.332253 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-09 01:09:16.332259 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-09 01:09:16.332264 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332270 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-09 01:09:16.332278 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-09 01:09:16.332283 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-09 01:09:16.332291 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094317, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2143967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332301 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332307 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.332313 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094309, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2126002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332318 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094320, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2151904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-09 01:09:16.332324 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2243733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332332 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332338 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2243733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332346 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094309, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2126002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332355 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094356, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2215827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332361 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332366 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332448 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094356, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2215827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332474 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094320, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2151904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332480 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332491 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2243733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332503 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332508 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094332, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2173967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-03-09 01:09:16.332514 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094320, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2151904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332520 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332527 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332533 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.332543 | orchestrator | skipping: [testbed-node-0] 
=> (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094356, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2215827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332548 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332563 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094320, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2151904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332569 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332574 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.220053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332582 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332593 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.332598 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332604 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332612 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332618 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332624 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332629 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 
01:09:16.332635 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.332643 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332699 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.332707 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094338, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2183967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332713 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332723 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:16.332728 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.332734 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094327, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2163968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332740 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2218719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332745 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094309, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2126002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332757 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2243733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332763 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094356, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2215827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094320, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2151904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332777 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094311, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2132776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332784 | orchestrator | 2026-03-09 01:09:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:09:16.332790 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094343, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2193968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332796 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792,
'inode': 1094341, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2192256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094373, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2233968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:16.332810 | orchestrator | 2026-03-09 01:09:16.332816 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-09 01:09:16.332823 | orchestrator | Monday 09 March 2026 01:06:28 +0000 (0:00:39.093) 0:01:11.482 ********** 2026-03-09 01:09:16.332828 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:16.332834 | orchestrator | 2026-03-09 01:09:16.332839 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-09 01:09:16.332844 | orchestrator | Monday 09 March 2026 01:06:29 +0000 (0:00:00.850) 0:01:12.332 ********** 2026-03-09 01:09:16.332850 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332861 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.332866 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 
01:09:16.332872 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.332877 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:16.332882 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332887 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332893 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.332898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332903 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.332908 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332913 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332919 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.332924 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332930 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.332935 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332945 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.332950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332955 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.332960 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332973 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.332978 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332983 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.332988 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.332993 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.332998 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.333003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.333008 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.333013 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.333034 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.333043 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:16.333048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:16.333053 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-09 01:09:16.333059 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:09:16.333063 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 01:09:16.333068 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 01:09:16.333073 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:09:16.333078 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:09:16.333083 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:09:16.333088 | orchestrator | 2026-03-09 01:09:16.333093 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-09 01:09:16.333098 | orchestrator | Monday 09 March 2026 01:06:33 +0000 (0:00:04.753) 0:01:17.086 ********** 2026-03-09 01:09:16.333109 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333115 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333120 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333124 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333141 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333147 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333152 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333158 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333163 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333168 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333173 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:16.333178 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333183 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-09 01:09:16.333187 | orchestrator | 2026-03-09 01:09:16.333192 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-09 01:09:16.333200 | orchestrator | Monday 09 March 2026 01:07:01 +0000 (0:00:27.621) 0:01:44.708 ********** 2026-03-09 01:09:16.333205 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333210 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333215 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333220 | orchestrator | skipping: 
[testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333225 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333241 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333246 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333251 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333263 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333269 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333274 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:16.333279 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333296 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-09 01:09:16.333301 | orchestrator | 2026-03-09 01:09:16.333307 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-09 01:09:16.333317 | orchestrator | Monday 09 March 2026 01:07:06 +0000 (0:00:05.355) 0:01:50.064 ********** 2026-03-09 01:09:16.333322 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333327 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333332 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333337 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333342 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333354 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 01:09:16.333364 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-09 01:09:16.333369 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333403 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333409 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333413 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333418 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:16.333424 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333429 | orchestrator | 2026-03-09 01:09:16.333434 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-09 01:09:16.333439 | orchestrator | Monday 09 March 2026 01:07:10 +0000 (0:00:03.884) 0:01:53.949 ********** 2026-03-09 01:09:16.333444 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:16.333449 | orchestrator | 2026-03-09 01:09:16.333454 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-09 01:09:16.333459 | orchestrator | Monday 09 March 2026 01:07:11 +0000 (0:00:01.120) 0:01:55.069 ********** 2026-03-09 01:09:16.333464 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.333469 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333489 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333495 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333500 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333505 | orchestrator | 
skipping: [testbed-node-4] 2026-03-09 01:09:16.333510 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333514 | orchestrator | 2026-03-09 01:09:16.333520 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-09 01:09:16.333525 | orchestrator | Monday 09 March 2026 01:07:12 +0000 (0:00:01.034) 0:01:56.103 ********** 2026-03-09 01:09:16.333530 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.333534 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333539 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333544 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333549 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.333554 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.333559 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.333564 | orchestrator | 2026-03-09 01:09:16.333569 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-09 01:09:16.333587 | orchestrator | Monday 09 March 2026 01:07:17 +0000 (0:00:04.372) 0:02:00.476 ********** 2026-03-09 01:09:16.333592 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333597 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.333602 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333607 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333616 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333621 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333629 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333635 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:09:16.333640 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333645 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333650 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333655 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333660 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:16.333664 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333669 | orchestrator | 2026-03-09 01:09:16.333681 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-09 01:09:16.333686 | orchestrator | Monday 09 March 2026 01:07:21 +0000 (0:00:03.816) 0:02:04.292 ********** 2026-03-09 01:09:16.333691 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333696 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333701 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333706 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333716 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333721 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333726 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333731 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333736 | orchestrator | skipping: 
[testbed-node-5] 2026-03-09 01:09:16.333741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:16.333746 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333752 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-09 01:09:16.333757 | orchestrator | 2026-03-09 01:09:16.333769 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-09 01:09:16.333774 | orchestrator | Monday 09 March 2026 01:07:23 +0000 (0:00:02.674) 0:02:06.967 ********** 2026-03-09 01:09:16.333779 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:16.333784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-09 01:09:16.333789 | orchestrator | due to this access issue: 2026-03-09 01:09:16.333794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-09 01:09:16.333799 | orchestrator | not a directory 2026-03-09 01:09:16.333804 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:16.333809 | orchestrator | 2026-03-09 01:09:16.333814 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-09 01:09:16.333823 | orchestrator | Monday 09 March 2026 01:07:25 +0000 (0:00:01.609) 0:02:08.577 ********** 2026-03-09 01:09:16.333829 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.333834 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333839 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333843 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333848 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333853 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333858 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
01:09:16.333866 | orchestrator | 2026-03-09 01:09:16.333871 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-09 01:09:16.333876 | orchestrator | Monday 09 March 2026 01:07:26 +0000 (0:00:01.241) 0:02:09.819 ********** 2026-03-09 01:09:16.333881 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.333886 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:16.333891 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:16.333896 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:16.333905 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:16.333910 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:16.333915 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:16.333920 | orchestrator | 2026-03-09 01:09:16.333925 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-09 01:09:16.333935 | orchestrator | Monday 09 March 2026 01:07:28 +0000 (0:00:01.365) 0:02:11.185 ********** 2026-03-09 01:09:16.333941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.333949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.333955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.333967 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 01:09:16.334034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.334042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.334051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.334056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334076 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:16.334081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334135 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:16.334175 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 01:09:16.334181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:16.334211 | orchestrator | 2026-03-09 01:09:16.334217 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-09 01:09:16.334222 | orchestrator | Monday 09 March 2026 01:07:33 +0000 (0:00:05.215) 0:02:16.401 ********** 2026-03-09 01:09:16.334227 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 01:09:16.334232 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:16.334237 | orchestrator | 2026-03-09 01:09:16.334242 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334247 | orchestrator | Monday 09 March 2026 01:07:34 +0000 (0:00:01.696) 0:02:18.098 ********** 2026-03-09 01:09:16.334252 | orchestrator | 2026-03-09 01:09:16.334257 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334262 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.071) 0:02:18.170 ********** 2026-03-09 01:09:16.334266 | orchestrator | 2026-03-09 01:09:16.334271 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334276 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.067) 0:02:18.237 ********** 2026-03-09 01:09:16.334281 | orchestrator | 2026-03-09 01:09:16.334286 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334291 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.093) 0:02:18.331 ********** 2026-03-09 01:09:16.334296 | orchestrator | 2026-03-09 01:09:16.334301 | orchestrator | TASK 
[prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334306 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.291) 0:02:18.622 ********** 2026-03-09 01:09:16.334311 | orchestrator | 2026-03-09 01:09:16.334316 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334321 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.079) 0:02:18.702 ********** 2026-03-09 01:09:16.334326 | orchestrator | 2026-03-09 01:09:16.334390 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-09 01:09:16.334401 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.072) 0:02:18.774 ********** 2026-03-09 01:09:16.334406 | orchestrator | 2026-03-09 01:09:16.334410 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-09 01:09:16.334415 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.090) 0:02:18.865 ********** 2026-03-09 01:09:16.334420 | orchestrator | changed: [testbed-manager] 2026-03-09 01:09:16.334429 | orchestrator | 2026-03-09 01:09:16.334434 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-09 01:09:16.334439 | orchestrator | Monday 09 March 2026 01:07:55 +0000 (0:00:19.616) 0:02:38.482 ********** 2026-03-09 01:09:16.334444 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:09:16.334448 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.334453 | orchestrator | changed: [testbed-manager] 2026-03-09 01:09:16.334458 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.334463 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.334468 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:09:16.334473 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:09:16.334478 | orchestrator | 2026-03-09 01:09:16.334482 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-09 01:09:16.334487 | orchestrator | Monday 09 March 2026 01:08:11 +0000 (0:00:16.458) 0:02:54.941 ********** 2026-03-09 01:09:16.334492 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.334497 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.334502 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.334507 | orchestrator | 2026-03-09 01:09:16.334512 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-09 01:09:16.334516 | orchestrator | Monday 09 March 2026 01:08:18 +0000 (0:00:06.789) 0:03:01.730 ********** 2026-03-09 01:09:16.334521 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.334526 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.334531 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.334536 | orchestrator | 2026-03-09 01:09:16.334541 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-09 01:09:16.334546 | orchestrator | Monday 09 March 2026 01:08:23 +0000 (0:00:05.378) 0:03:07.108 ********** 2026-03-09 01:09:16.334551 | orchestrator | changed: [testbed-manager] 2026-03-09 01:09:16.334556 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:09:16.334560 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:09:16.334565 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.334575 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:09:16.334580 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.334585 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.334590 | orchestrator | 2026-03-09 01:09:16.334595 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-09 01:09:16.334599 | orchestrator | Monday 09 March 2026 01:08:38 +0000 (0:00:14.854) 0:03:21.962 ********** 2026-03-09 01:09:16.334604 | 
orchestrator | changed: [testbed-manager] 2026-03-09 01:09:16.334609 | orchestrator | 2026-03-09 01:09:16.334614 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-09 01:09:16.334619 | orchestrator | Monday 09 March 2026 01:08:46 +0000 (0:00:08.016) 0:03:29.979 ********** 2026-03-09 01:09:16.334624 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:16.334629 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:16.334634 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:16.334639 | orchestrator | 2026-03-09 01:09:16.334644 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-09 01:09:16.334648 | orchestrator | Monday 09 March 2026 01:08:53 +0000 (0:00:06.780) 0:03:36.759 ********** 2026-03-09 01:09:16.334653 | orchestrator | changed: [testbed-manager] 2026-03-09 01:09:16.334658 | orchestrator | 2026-03-09 01:09:16.334663 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-09 01:09:16.334668 | orchestrator | Monday 09 March 2026 01:09:00 +0000 (0:00:06.460) 0:03:43.220 ********** 2026-03-09 01:09:16.334672 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:09:16.334678 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:09:16.334683 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:09:16.334687 | orchestrator | 2026-03-09 01:09:16.334692 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:09:16.334697 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 01:09:16.334706 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:09:16.334711 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:09:16.334716 
| orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:09:16.334721 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:09:16.334726 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:09:16.334730 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:09:16.334735 | orchestrator | 2026-03-09 01:09:16.334740 | orchestrator | 2026-03-09 01:09:16.334748 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:09:16.334753 | orchestrator | Monday 09 March 2026 01:09:15 +0000 (0:00:15.079) 0:03:58.299 ********** 2026-03-09 01:09:16.334758 | orchestrator | =============================================================================== 2026-03-09 01:09:16.334763 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.09s 2026-03-09 01:09:16.334768 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 27.62s 2026-03-09 01:09:16.334773 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.62s 2026-03-09 01:09:16.334777 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.46s 2026-03-09 01:09:16.334782 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 15.08s 2026-03-09 01:09:16.334787 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.85s 2026-03-09 01:09:16.334792 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.02s 2026-03-09 01:09:16.334796 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.71s 2026-03-09 01:09:16.334801 | orchestrator | 
prometheus : Copying over config.json files ----------------------------- 6.89s 2026-03-09 01:09:16.334806 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.79s 2026-03-09 01:09:16.334811 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.78s 2026-03-09 01:09:16.334816 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.46s 2026-03-09 01:09:16.334821 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.38s 2026-03-09 01:09:16.334826 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.36s 2026-03-09 01:09:16.334831 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.22s 2026-03-09 01:09:16.334836 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.93s 2026-03-09 01:09:16.334840 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 4.75s 2026-03-09 01:09:16.334845 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.37s 2026-03-09 01:09:16.334850 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.88s 2026-03-09 01:09:16.334856 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.82s 2026-03-09 01:09:19.352800 | orchestrator | 2026-03-09 01:09:19 | INFO  | Task e6c3564a-acff-4363-869a-76dd3e2deea0 is in state STARTED 2026-03-09 01:09:19.352919 | orchestrator | 2026-03-09 01:09:19 | INFO  | Task af40edbd-d1bb-493b-acd7-e0d086dec7bf is in state SUCCESS 2026-03-09 01:09:19.353870 | orchestrator | 2026-03-09 01:09:19.353910 | orchestrator | 2026-03-09 01:09:19.353921 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:09:19.353932 | orchestrator | 2026-03-09 01:09:19.353942 
| orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:09:19.353952 | orchestrator | Monday 09 March 2026 01:05:27 +0000 (0:00:00.359) 0:00:00.359 ********** 2026-03-09 01:09:19.353963 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:09:19.353973 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:09:19.353982 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:09:19.353992 | orchestrator | 2026-03-09 01:09:19.354002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:09:19.354054 | orchestrator | Monday 09 March 2026 01:05:27 +0000 (0:00:00.346) 0:00:00.705 ********** 2026-03-09 01:09:19.354068 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-09 01:09:19.354078 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-09 01:09:19.354088 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-09 01:09:19.354097 | orchestrator | 2026-03-09 01:09:19.354138 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-09 01:09:19.354149 | orchestrator | 2026-03-09 01:09:19.354166 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:09:19.354181 | orchestrator | Monday 09 March 2026 01:05:28 +0000 (0:00:00.493) 0:00:01.199 ********** 2026-03-09 01:09:19.354206 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:19.354284 | orchestrator | 2026-03-09 01:09:19.354304 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-09 01:09:19.354320 | orchestrator | Monday 09 March 2026 01:05:29 +0000 (0:00:01.223) 0:00:02.422 ********** 2026-03-09 01:09:19.354337 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-09 01:09:19.354354 | orchestrator 
| 2026-03-09 01:09:19.354371 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-09 01:09:19.354441 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:04.601) 0:00:07.023 ********** 2026-03-09 01:09:19.354451 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-09 01:09:19.354461 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-09 01:09:19.354473 | orchestrator | 2026-03-09 01:09:19.354485 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-09 01:09:19.354497 | orchestrator | Monday 09 March 2026 01:05:41 +0000 (0:00:07.756) 0:00:14.779 ********** 2026-03-09 01:09:19.354508 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-09 01:09:19.354520 | orchestrator | 2026-03-09 01:09:19.354531 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-09 01:09:19.354543 | orchestrator | Monday 09 March 2026 01:05:45 +0000 (0:00:04.276) 0:00:19.056 ********** 2026-03-09 01:09:19.354568 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:09:19.354581 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-09 01:09:19.354593 | orchestrator | 2026-03-09 01:09:19.354606 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-09 01:09:19.354618 | orchestrator | Monday 09 March 2026 01:05:51 +0000 (0:00:05.130) 0:00:24.187 ********** 2026-03-09 01:09:19.354630 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:09:19.354642 | orchestrator | 2026-03-09 01:09:19.354654 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-09 01:09:19.354665 | orchestrator | Monday 09 March 2026 01:05:55 +0000 
(0:00:04.422) 0:00:28.610 ********** 2026-03-09 01:09:19.354678 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-09 01:09:19.354689 | orchestrator | 2026-03-09 01:09:19.354701 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-09 01:09:19.354726 | orchestrator | Monday 09 March 2026 01:06:00 +0000 (0:00:05.082) 0:00:33.693 ********** 2026-03-09 01:09:19.354758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.354778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.354791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.354808 | orchestrator | 2026-03-09 01:09:19.354818 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:09:19.354828 | orchestrator | Monday 09 March 2026 01:06:09 +0000 (0:00:08.953) 0:00:42.646 ********** 2026-03-09 01:09:19.354838 | orchestrator | included: 
/ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:19.354848 | orchestrator | 2026-03-09 01:09:19.354868 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-09 01:09:19.354890 | orchestrator | Monday 09 March 2026 01:06:10 +0000 (0:00:00.926) 0:00:43.572 ********** 2026-03-09 01:09:19.354906 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.354922 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:19.354938 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:19.354955 | orchestrator | 2026-03-09 01:09:19.354971 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-09 01:09:19.354988 | orchestrator | Monday 09 March 2026 01:06:15 +0000 (0:00:05.433) 0:00:49.006 ********** 2026-03-09 01:09:19.355004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355019 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355050 | orchestrator | 2026-03-09 01:09:19.355066 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-09 01:09:19.355084 | orchestrator | Monday 09 March 2026 01:06:18 +0000 (0:00:02.381) 0:00:51.387 ********** 2026-03-09 01:09:19.355101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355118 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 
'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:09:19.355146 | orchestrator | 2026-03-09 01:09:19.355160 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-09 01:09:19.355176 | orchestrator | Monday 09 March 2026 01:06:19 +0000 (0:00:01.686) 0:00:53.074 ********** 2026-03-09 01:09:19.355193 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:09:19.355211 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:09:19.355226 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:09:19.355243 | orchestrator | 2026-03-09 01:09:19.355333 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-09 01:09:19.355349 | orchestrator | Monday 09 March 2026 01:06:21 +0000 (0:00:01.306) 0:00:54.380 ********** 2026-03-09 01:09:19.355369 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:19.355402 | orchestrator | 2026-03-09 01:09:19.355413 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-09 01:09:19.355423 | orchestrator | Monday 09 March 2026 01:06:21 +0000 (0:00:00.141) 0:00:54.522 ********** 2026-03-09 01:09:19.355433 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:19.355442 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:19.355452 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:19.355462 | orchestrator | 2026-03-09 01:09:19.355478 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:09:19.355488 | orchestrator | Monday 09 March 2026 01:06:22 +0000 (0:00:00.692) 0:00:55.214 ********** 2026-03-09 01:09:19.355498 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:09:19.355507 | orchestrator | 2026-03-09 01:09:19.355517 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 
2026-03-09 01:09:19.355527 | orchestrator | Monday 09 March 2026 01:06:22 +0000 (0:00:00.656) 0:00:55.870 ********** 2026-03-09 01:09:19.355549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.355562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.355584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.355595 | orchestrator | 2026-03-09 01:09:19.355605 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-09 01:09:19.355615 | orchestrator | Monday 09 March 2026 01:06:29 +0000 (0:00:06.454) 0:01:02.325 ********** 2026-03-09 01:09:19.355633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:09:19.355650 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:19.355665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:09:19.355676 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:19.355694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:09:19.355705 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:19.355736 | orchestrator | 2026-03-09 01:09:19.355747 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-09 01:09:19.355756 | orchestrator | Monday 09 March 2026 01:06:36 +0000 (0:00:07.377) 0:01:09.703 ********** 2026-03-09 01:09:19.355782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:09:19.355799 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:19.355815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:09:19.355826 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:19.355837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 01:09:19.355852 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.355862 | orchestrator |
2026-03-09 01:09:19.355872 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-03-09 01:09:19.355881 | orchestrator | Monday 09 March 2026 01:06:42 +0000 (0:00:05.456) 0:01:15.159 **********
2026-03-09 01:09:19.355901 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.355911 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.355921 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.355931 | orchestrator |
2026-03-09 01:09:19.355941 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-03-09 01:09:19.355950 | orchestrator | Monday 09 March 2026 01:06:47 +0000 (0:00:05.487) 0:01:20.646 **********
2026-03-09 01:09:19.355961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'},
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.355980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.356001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-09 01:09:19.356012 | orchestrator |
2026-03-09 01:09:19.356022 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-09 01:09:19.356032 | orchestrator | Monday 09 March 2026 01:06:53 +0000 (0:00:06.272) 0:01:26.919 **********
2026-03-09 01:09:19.356042 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:19.356051 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:19.356061 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:19.356071 | orchestrator |
2026-03-09 01:09:19.356080 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-09 01:09:19.356090 | orchestrator | Monday 09 March 2026 01:07:03 +0000 (0:00:10.147) 0:01:37.066 **********
2026-03-09 01:09:19.356100 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356109 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356119 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356129 | orchestrator |
2026-03-09 01:09:19.356138 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-09 01:09:19.356148 | orchestrator | Monday 09 March 2026 01:07:11 +0000 (0:00:07.803) 0:01:44.870 **********
2026-03-09 01:09:19.356163 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356427 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356445 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356455 | orchestrator |
2026-03-09 01:09:19.356465 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-09 01:09:19.356475 | orchestrator | Monday 09 March 2026 01:07:19 +0000 (0:00:07.321) 0:01:52.192 **********
2026-03-09 01:09:19.356485 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356494 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356504 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356514 | orchestrator |
2026-03-09 01:09:19.356523 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-09 01:09:19.356533 | orchestrator | Monday 09 March 2026 01:07:24 +0000 (0:00:05.884) 0:01:58.077 **********
2026-03-09 01:09:19.356543 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356552 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356562 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356572 | orchestrator |
2026-03-09 01:09:19.356581 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-09 01:09:19.356591 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:04.927) 0:02:03.004 **********
2026-03-09 01:09:19.356601 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356610 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356620 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356630 | orchestrator |
2026-03-09 01:09:19.356639 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-09 01:09:19.356649 | orchestrator | Monday 09 March 2026 01:07:30 +0000 (0:00:00.507) 0:02:03.511 **********
2026-03-09 01:09:19.356659 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-09 01:09:19.356669 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:09:19.356679 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-09 01:09:19.356689 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:09:19.356699 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-09 01:09:19.356708 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:09:19.356718 | orchestrator |
2026-03-09 01:09:19.356728 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-09 01:09:19.356737 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:05.051) 0:02:08.562 **********
2026-03-09 01:09:19.356747 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:19.356757 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:19.356766 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:19.356776 | orchestrator |
2026-03-09 01:09:19.356785 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-03-09 01:09:19.356795 | orchestrator | Monday 09 March 2026 01:07:40 +0000 (0:00:04.819) 0:02:13.382 **********
2026-03-09 01:09:19.356812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes':
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.356839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.356856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:09:19.356872 | orchestrator | 2026-03-09 01:09:19.356883 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:09:19.356892 | orchestrator | Monday 09 March 2026 01:07:44 +0000 (0:00:04.482) 0:02:17.864 ********** 2026-03-09 01:09:19.356902 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:19.356912 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:19.356921 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:19.356936 | orchestrator | 2026-03-09 01:09:19.356952 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-09 01:09:19.356969 | orchestrator | Monday 09 March 2026 01:07:45 +0000 (0:00:00.323) 0:02:18.188 ********** 2026-03-09 01:09:19.356984 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.357000 | orchestrator | 2026-03-09 01:09:19.357017 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-09 01:09:19.357034 | orchestrator | Monday 09 March 2026 01:07:47 +0000 (0:00:02.452) 0:02:20.640 ********** 2026-03-09 01:09:19.357048 | orchestrator | changed: [testbed-node-0] 
2026-03-09 01:09:19.357066 | orchestrator | 2026-03-09 01:09:19.357083 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-09 01:09:19.357100 | orchestrator | Monday 09 March 2026 01:07:50 +0000 (0:00:02.710) 0:02:23.351 ********** 2026-03-09 01:09:19.357119 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.357137 | orchestrator | 2026-03-09 01:09:19.357154 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-09 01:09:19.357170 | orchestrator | Monday 09 March 2026 01:07:53 +0000 (0:00:02.988) 0:02:26.339 ********** 2026-03-09 01:09:19.357194 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.357212 | orchestrator | 2026-03-09 01:09:19.357228 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-09 01:09:19.357255 | orchestrator | Monday 09 March 2026 01:08:27 +0000 (0:00:34.415) 0:03:00.754 ********** 2026-03-09 01:09:19.357271 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.357285 | orchestrator | 2026-03-09 01:09:19.357301 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:09:19.357317 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:04.802) 0:03:05.557 ********** 2026-03-09 01:09:19.357335 | orchestrator | 2026-03-09 01:09:19.357352 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:09:19.357368 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:00.109) 0:03:05.666 ********** 2026-03-09 01:09:19.357450 | orchestrator | 2026-03-09 01:09:19.357461 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:09:19.357470 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:00.112) 0:03:05.779 ********** 2026-03-09 01:09:19.357480 | orchestrator | 2026-03-09 01:09:19.357489 | 
orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-09 01:09:19.357499 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:00.072) 0:03:05.851 ********** 2026-03-09 01:09:19.357508 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:19.357518 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:19.357528 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:19.357538 | orchestrator | 2026-03-09 01:09:19.357547 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:09:19.357558 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:09:19.357570 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:09:19.357579 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:09:19.357599 | orchestrator | 2026-03-09 01:09:19.357609 | orchestrator | 2026-03-09 01:09:19.357619 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:09:19.357629 | orchestrator | Monday 09 March 2026 01:09:16 +0000 (0:00:43.421) 0:03:49.273 ********** 2026-03-09 01:09:19.357639 | orchestrator | =============================================================================== 2026-03-09 01:09:19.357648 | orchestrator | glance : Restart glance-api container ---------------------------------- 43.42s 2026-03-09 01:09:19.357658 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 34.42s 2026-03-09 01:09:19.357668 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.15s 2026-03-09 01:09:19.357678 | orchestrator | glance : Ensuring config directories exist ------------------------------ 8.95s 2026-03-09 01:09:19.357694 | orchestrator | glance : 
Copying over glance-cache.conf for glance_api ------------------ 7.80s 2026-03-09 01:09:19.357704 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.76s 2026-03-09 01:09:19.357714 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 7.38s 2026-03-09 01:09:19.357724 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.32s 2026-03-09 01:09:19.357733 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.45s 2026-03-09 01:09:19.357743 | orchestrator | glance : Copying over config.json files for services -------------------- 6.27s 2026-03-09 01:09:19.357753 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.88s 2026-03-09 01:09:19.357763 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.49s 2026-03-09 01:09:19.357773 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.45s 2026-03-09 01:09:19.357783 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.43s 2026-03-09 01:09:19.357792 | orchestrator | service-ks-register : glance | Creating users --------------------------- 5.13s 2026-03-09 01:09:19.357802 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.08s 2026-03-09 01:09:19.357812 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.05s 2026-03-09 01:09:19.357821 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.93s 2026-03-09 01:09:19.357831 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.82s 2026-03-09 01:09:19.357841 | orchestrator | glance : Disable log_bin_trust_function_creators function --------------- 4.80s 2026-03-09 01:09:19.357850 | orchestrator | 2026-03-09 01:09:19 
| INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:09:19.357860 | orchestrator | 2026-03-09 01:09:19 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:19.357870 | orchestrator | 2026-03-09 01:09:19 | INFO  | Task 24fed854-5b88-4949-ac9b-e0896d51b11c is in state STARTED
2026-03-09 01:09:19.357881 | orchestrator | 2026-03-09 01:09:19 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:22.392643 | orchestrator | 2026-03-09 01:09:22 | INFO  | Task e6c3564a-acff-4363-869a-76dd3e2deea0 is in state STARTED
2026-03-09 01:09:22.393479 | orchestrator | 2026-03-09 01:09:22 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:09:22.394634 | orchestrator | 2026-03-09 01:09:22 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:09:22.396023 | orchestrator | 2026-03-09 01:09:22 | INFO  | Task 24fed854-5b88-4949-ac9b-e0896d51b11c is in state STARTED
2026-03-09 01:09:22.396037 | orchestrator | 2026-03-09 01:09:22 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:09:25 to 01:11:45; all four tasks remained in state STARTED throughout ...]
2026-03-09 01:11:48.755320 | orchestrator | 2026-03-09 01:11:48 | INFO  | Task e6c3564a-acff-4363-869a-76dd3e2deea0 is in state SUCCESS
2026-03-09 01:11:48.756256 | orchestrator |
2026-03-09 01:11:48.756297 | orchestrator |
2026-03-09 01:11:48.756313 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:11:48.756337 | orchestrator |
2026-03-09 01:11:48.756355 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:11:48.756370 | orchestrator | Monday 09 March 2026 01:09:20 +0000 (0:00:00.257) 0:00:00.257 **********
2026-03-09 01:11:48.756386 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:11:48.756400 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:11:48.756414 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:11:48.756427 | orchestrator |
2026-03-09 01:11:48.756495 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:11:48.756512 | orchestrator | Monday 09 March 2026 01:09:20 +0000 (0:00:00.328) 0:00:00.585 **********
2026-03-09 01:11:48.756526 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-09 01:11:48.756541 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-09 01:11:48.756556 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-09 01:11:48.756639 | orchestrator |
2026-03-09 01:11:48.756654 | orchestrator | PLAY [Apply role barbican]
*****************************************************
2026-03-09 01:11:48.756669 | orchestrator |
2026-03-09 01:11:48.756684 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-09 01:11:48.756698 | orchestrator | Monday 09 March 2026 01:09:20 +0000 (0:00:00.544) 0:00:00.983 **********
2026-03-09 01:11:48.756714 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:11:48.756730 | orchestrator |
2026-03-09 01:11:48.756743 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-09 01:11:48.756758 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:00.544) 0:00:01.527 **********
2026-03-09 01:11:48.756772 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-09 01:11:48.756788 | orchestrator |
2026-03-09 01:11:48.756804 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-09 01:11:48.756820 | orchestrator | Monday 09 March 2026 01:09:25 +0000 (0:00:03.850) 0:00:05.378 **********
2026-03-09 01:11:48.756837 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-09 01:11:48.756853 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-09 01:11:48.756869 | orchestrator |
2026-03-09 01:11:48.756885 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-09 01:11:48.756899 | orchestrator | Monday 09 March 2026 01:09:32 +0000 (0:00:07.106) 0:00:12.484 **********
2026-03-09 01:11:48.756914 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:11:48.756929 | orchestrator |
2026-03-09 01:11:48.756944 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-09 01:11:48.756959 | orchestrator | Monday 09 March 2026 01:09:35 +0000 (0:00:03.431) 0:00:15.916 **********
2026-03-09 01:11:48.757008 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:11:48.757026 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-09 01:11:48.757042 | orchestrator |
2026-03-09 01:11:48.757058 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-09 01:11:48.757073 | orchestrator | Monday 09 March 2026 01:09:39 +0000 (0:00:04.085) 0:00:20.002 **********
2026-03-09 01:11:48.757088 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:11:48.757103 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-09 01:11:48.757119 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-09 01:11:48.757134 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-09 01:11:48.757149 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-09 01:11:48.757164 | orchestrator |
2026-03-09 01:11:48.757179 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-09 01:11:48.757194 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:17.546) 0:00:37.548 **********
2026-03-09 01:11:48.757210 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-09 01:11:48.757224 | orchestrator |
2026-03-09 01:11:48.757239 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-09 01:11:48.757254 | orchestrator | Monday 09 March 2026 01:10:01 +0000 (0:00:04.180) 0:00:41.729 **********
2026-03-09 01:11:48.757290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.757325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.757335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.757356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:48.757417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:11:48.757432 | orchestrator |
2026-03-09 01:11:48.757578 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-09 01:11:48.757635 | orchestrator | Monday 09 March 2026 01:10:03 +0000 (0:00:02.078) 0:00:43.808 **********
2026-03-09 01:11:48.757645 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-09 01:11:48.757654 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-09 01:11:48.757662 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-09 01:11:48.757671 | orchestrator |
2026-03-09 01:11:48.757680 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-09 01:11:48.757689 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:02.330) 0:00:46.139 **********
2026-03-09 01:11:48.757698 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:11:48.757707 | orchestrator |
2026-03-09 01:11:48.757715 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-09 01:11:48.757724 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:00.246) 0:00:46.386 **********
2026-03-09 01:11:48.757732 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:11:48.757741 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:11:48.757750 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:11:48.757758 | orchestrator |
2026-03-09 01:11:48.757767 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-09 01:11:48.757775 | orchestrator | Monday 09 March 2026 01:10:07 +0000 (0:00:00.922) 0:00:47.308 **********
2026-03-09 01:11:48.757784 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:11:48.757793 | orchestrator |
2026-03-09 01:11:48.757801 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-09 01:11:48.757810 | orchestrator | Monday 09 March 2026 01:10:08 +0000 (0:00:01.231) 0:00:48.539 **********
2026-03-09 01:11:48.757827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-09 01:11:48.757849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.757868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.757877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757909 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.757958 | orchestrator | 2026-03-09 01:11:48.757967 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-09 01:11:48.757976 | orchestrator | Monday 09 March 2026 01:10:12 +0000 (0:00:04.533) 0:00:53.072 ********** 2026-03-09 01:11:48.757985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.757994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758062 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:48.758082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.758098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758117 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:48.758127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.758147 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758171 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:48.758180 | orchestrator | 2026-03-09 01:11:48.758226 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-09 01:11:48.758237 | orchestrator | Monday 09 March 2026 01:10:14 +0000 (0:00:01.792) 0:00:54.865 ********** 2026-03-09 01:11:48.758246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.758256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 
01:11:48.758274 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:48.758284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.758297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758329 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:48.758338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.758347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.758366 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:48.758375 | orchestrator | 2026-03-09 01:11:48.758384 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-09 01:11:48.758392 | orchestrator | Monday 09 March 2026 01:10:16 +0000 (0:00:01.704) 0:00:56.569 ********** 2026-03-09 01:11:48.758429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.758700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.758797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.758813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.758977 | orchestrator | 2026-03-09 01:11:48.758998 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-09 01:11:48.759021 | orchestrator | Monday 09 March 2026 01:10:22 +0000 (0:00:05.846) 0:01:02.416 ********** 2026-03-09 01:11:48.759040 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:48.759054 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.759065 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:48.759077 | orchestrator | 2026-03-09 01:11:48.759088 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-09 01:11:48.759100 | orchestrator | Monday 09 March 2026 01:10:27 +0000 (0:00:05.111) 0:01:07.527 ********** 2026-03-09 01:11:48.759112 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:11:48.759123 | orchestrator | 2026-03-09 01:11:48.759135 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-09 01:11:48.759146 | orchestrator | Monday 09 March 2026 01:10:30 +0000 (0:00:02.558) 0:01:10.086 ********** 2026-03-09 01:11:48.759158 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:48.759170 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:48.759181 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:48.759193 | orchestrator | 2026-03-09 01:11:48.759204 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-09 01:11:48.759216 | orchestrator | Monday 09 March 2026 01:10:32 +0000 (0:00:02.441) 0:01:12.528 ********** 2026-03-09 01:11:48.759229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759413 | orchestrator | 2026-03-09 01:11:48.759427 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-09 01:11:48.759441 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:16.180) 0:01:28.708 ********** 2026-03-09 01:11:48.759487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.759501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759540 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:48.759567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.759580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759606 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:11:48.759618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:11:48.759639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:11:48.759696 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:48.759710 | orchestrator | 2026-03-09 01:11:48.759721 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-09 01:11:48.759734 | orchestrator | Monday 09 March 2026 01:10:51 +0000 (0:00:02.529) 0:01:31.237 ********** 2026-03-09 01:11:48.759756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:11:48.759804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:11:48.759899 | orchestrator | 2026-03-09 01:11:48.759911 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-09 01:11:48.759923 | orchestrator | Monday 09 March 2026 01:10:57 +0000 (0:00:06.199) 0:01:37.437 ********** 2026-03-09 01:11:48.759934 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:11:48.759946 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
01:11:48.759958 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:11:48.759970 | orchestrator | 2026-03-09 01:11:48.759983 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-09 01:11:48.760003 | orchestrator | Monday 09 March 2026 01:10:58 +0000 (0:00:01.304) 0:01:38.742 ********** 2026-03-09 01:11:48.760024 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760043 | orchestrator | 2026-03-09 01:11:48.760064 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-09 01:11:48.760084 | orchestrator | Monday 09 March 2026 01:11:01 +0000 (0:00:02.742) 0:01:41.484 ********** 2026-03-09 01:11:48.760105 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760125 | orchestrator | 2026-03-09 01:11:48.760145 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-09 01:11:48.760158 | orchestrator | Monday 09 March 2026 01:11:04 +0000 (0:00:02.798) 0:01:44.283 ********** 2026-03-09 01:11:48.760169 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760181 | orchestrator | 2026-03-09 01:11:48.760193 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:11:48.760205 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:12.941) 0:01:57.225 ********** 2026-03-09 01:11:48.760217 | orchestrator | 2026-03-09 01:11:48.760228 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:11:48.760246 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.206) 0:01:57.432 ********** 2026-03-09 01:11:48.760259 | orchestrator | 2026-03-09 01:11:48.760271 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:11:48.760282 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.138) 0:01:57.570 ********** 2026-03-09 
01:11:48.760293 | orchestrator | 2026-03-09 01:11:48.760305 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-09 01:11:48.760316 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.174) 0:01:57.745 ********** 2026-03-09 01:11:48.760328 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760340 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:48.760351 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:48.760361 | orchestrator | 2026-03-09 01:11:48.760373 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-09 01:11:48.760385 | orchestrator | Monday 09 March 2026 01:11:33 +0000 (0:00:15.865) 0:02:13.610 ********** 2026-03-09 01:11:48.760396 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760408 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:48.760429 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:48.760441 | orchestrator | 2026-03-09 01:11:48.760506 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-09 01:11:48.760527 | orchestrator | Monday 09 March 2026 01:11:40 +0000 (0:00:06.472) 0:02:20.082 ********** 2026-03-09 01:11:48.760547 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:11:48.760560 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:11:48.760572 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:11:48.760601 | orchestrator | 2026-03-09 01:11:48.760613 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:11:48.760627 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:11:48.760640 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:11:48.760651 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:11:48.760663 | orchestrator | 2026-03-09 01:11:48.760674 | orchestrator | 2026-03-09 01:11:48.760686 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:11:48.760697 | orchestrator | Monday 09 March 2026 01:11:46 +0000 (0:00:06.313) 0:02:26.396 ********** 2026-03-09 01:11:48.760708 | orchestrator | =============================================================================== 2026-03-09 01:11:48.760720 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.55s 2026-03-09 01:11:48.760731 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 16.18s 2026-03-09 01:11:48.760742 | orchestrator | barbican : Restart barbican-api container ------------------------------ 15.87s 2026-03-09 01:11:48.760753 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.94s 2026-03-09 01:11:48.760766 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.11s 2026-03-09 01:11:48.760777 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.47s 2026-03-09 01:11:48.760788 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.31s 2026-03-09 01:11:48.760799 | orchestrator | barbican : Check barbican containers ------------------------------------ 6.20s 2026-03-09 01:11:48.760811 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.85s 2026-03-09 01:11:48.760823 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 5.11s 2026-03-09 01:11:48.760834 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.53s 2026-03-09 01:11:48.760846 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.18s 
2026-03-09 01:11:48.760857 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.09s 2026-03-09 01:11:48.760869 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.85s 2026-03-09 01:11:48.760880 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.43s 2026-03-09 01:11:48.760891 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.80s 2026-03-09 01:11:48.760902 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.74s 2026-03-09 01:11:48.760914 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.56s 2026-03-09 01:11:48.760925 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.53s 2026-03-09 01:11:48.760936 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.44s 2026-03-09 01:11:48.760947 | orchestrator | 2026-03-09 01:11:48 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED 2026-03-09 01:11:48.760959 | orchestrator | 2026-03-09 01:11:48 | INFO  | Task 67960ad4-6c13-48bc-9af3-64b40bfb5238 is in state STARTED 2026-03-09 01:11:48.760971 | orchestrator | 2026-03-09 01:11:48 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:11:48.760988 | orchestrator | 2026-03-09 01:11:48 | INFO  | Task 24fed854-5b88-4949-ac9b-e0896d51b11c is in state STARTED 2026-03-09 01:11:48.761001 | orchestrator | 2026-03-09 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:13:04.970109 | orchestrator | 2026-03-09 01:13:04 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED 2026-03-09 01:13:04.971155 | orchestrator | 2026-03-09 01:13:04 | INFO  | Task 67960ad4-6c13-48bc-9af3-64b40bfb5238 is in state STARTED 2026-03-09 01:13:04.972119 | orchestrator | 2026-03-09 01:13:04 | INFO  | Task
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:13:04.976150 | orchestrator | 2026-03-09 01:13:04 | INFO  | Task 24fed854-5b88-4949-ac9b-e0896d51b11c is in state SUCCESS 2026-03-09 01:13:04.978091 | orchestrator | 2026-03-09 01:13:04.978225 | orchestrator | 2026-03-09 01:13:04.978242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:13:04.978252 | orchestrator | 2026-03-09 01:13:04.978280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:13:04.978289 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:00.250) 0:00:00.250 ********** 2026-03-09 01:13:04.978297 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:13:04.978306 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:13:04.978314 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:13:04.978321 | orchestrator | 2026-03-09 01:13:04.978328 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:13:04.978336 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:00.295) 0:00:00.546 ********** 2026-03-09 01:13:04.978344 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-09 01:13:04.978352 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-09 01:13:04.978395 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-09 01:13:04.978409 | orchestrator | 2026-03-09 01:13:04.978421 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-09 01:13:04.978459 | orchestrator | 2026-03-09 01:13:04.978693 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:13:04.978709 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:00.387) 0:00:00.933 ********** 2026-03-09 01:13:04.978724 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:13:04.978740 | orchestrator | 2026-03-09 01:13:04.978753 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-09 01:13:04.978766 | orchestrator | Monday 09 March 2026 01:09:22 +0000 (0:00:00.573) 0:00:01.506 ********** 2026-03-09 01:13:04.978781 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-09 01:13:04.978794 | orchestrator | 2026-03-09 01:13:04.978808 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-09 01:13:04.978851 | orchestrator | Monday 09 March 2026 01:09:26 +0000 (0:00:03.803) 0:00:05.309 ********** 2026-03-09 01:13:04.978889 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-09 01:13:04.978905 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-09 01:13:04.978918 | orchestrator | 2026-03-09 01:13:04.978930 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-09 01:13:04.979032 | orchestrator | Monday 09 March 2026 01:09:33 +0000 (0:00:07.363) 0:00:12.673 ********** 2026-03-09 01:13:04.979044 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:13:04.979051 | orchestrator | 2026-03-09 01:13:04.979059 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-09 01:13:04.979066 | orchestrator | Monday 09 March 2026 01:09:36 +0000 (0:00:03.397) 0:00:16.070 ********** 2026-03-09 01:13:04.979074 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:13:04.979170 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-09 01:13:04.979239 | orchestrator | 2026-03-09 01:13:04.979252 | orchestrator | TASK 
[service-ks-register : designate | Creating roles] ************************ 2026-03-09 01:13:04.979266 | orchestrator | Monday 09 March 2026 01:09:41 +0000 (0:00:04.193) 0:00:20.264 ********** 2026-03-09 01:13:04.979278 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:13:04.979291 | orchestrator | 2026-03-09 01:13:04.979305 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-09 01:13:04.979317 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:03.685) 0:00:23.949 ********** 2026-03-09 01:13:04.979330 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-09 01:13:04.979342 | orchestrator | 2026-03-09 01:13:04.979355 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-09 01:13:04.979368 | orchestrator | Monday 09 March 2026 01:09:49 +0000 (0:00:04.992) 0:00:28.942 ********** 2026-03-09 01:13:04.979384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979700 | orchestrator | 2026-03-09 01:13:04.979708 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-09 01:13:04.979715 | orchestrator | Monday 09 March 2026 01:09:53 +0000 (0:00:03.969) 0:00:32.912 ********** 2026-03-09 01:13:04.979723 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
01:13:04.979730 | orchestrator | 2026-03-09 01:13:04.979738 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-09 01:13:04.979746 | orchestrator | Monday 09 March 2026 01:09:53 +0000 (0:00:00.126) 0:00:33.039 ********** 2026-03-09 01:13:04.979753 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.979764 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.979777 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:04.979790 | orchestrator | 2026-03-09 01:13:04.979802 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:13:04.979814 | orchestrator | Monday 09 March 2026 01:09:54 +0000 (0:00:00.382) 0:00:33.421 ********** 2026-03-09 01:13:04.979822 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:13:04.979830 | orchestrator | 2026-03-09 01:13:04.979838 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-09 01:13:04.979845 | orchestrator | Monday 09 March 2026 01:09:55 +0000 (0:00:00.723) 0:00:34.144 ********** 2026-03-09 01:13:04.979861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.979912 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-03-09 01:13:04.979976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.979992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.980061 | orchestrator | 2026-03-09 01:13:04.980069 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-09 01:13:04.980077 | orchestrator | Monday 09 March 
2026 01:10:02 +0000 (0:00:07.267) 0:00:41.411 ********** 2026-03-09 01:13:04.980085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.980108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.980116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.980124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.980135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.980144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.980152 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.980160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.980947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 
01:13:04.981003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981055 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981064 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.981073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.981103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.981112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981147 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:04.981155 | orchestrator | 2026-03-09 01:13:04.981163 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-09 01:13:04.981188 | orchestrator | Monday 09 March 2026 01:10:03 +0000 (0:00:00.818) 0:00:42.230 ********** 2026-03-09 01:13:04.981196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.981208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.981216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981274 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.981282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.981296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.981304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.981323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.981345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 
01:13:04.981374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.981393 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.981401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
2026-03-09 01:13:04.981416 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:04.981424 | orchestrator | 2026-03-09 01:13:04.981431 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-09 01:13:04.981439 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:02.972) 0:00:45.202 ********** 2026-03-09 01:13:04.981446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981679 | orchestrator | 2026-03-09 01:13:04.981686 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-09 01:13:04.981694 | orchestrator | Monday 09 March 2026 01:10:13 +0000 (0:00:07.680) 0:00:52.883 ********** 2026-03-09 01:13:04.981701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-03-09 01:13:04.981754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981907 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981950 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.981987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982001 | orchestrator | 2026-03-09 01:13:04.982014 | orchestrator | TASK [designate : Copying over pools.yaml] 
************************************* 2026-03-09 01:13:04.982082 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:34.831) 0:01:27.714 ********** 2026-03-09 01:13:04.982093 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:13:04.982101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:13:04.982109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:13:04.982116 | orchestrator | 2026-03-09 01:13:04.982123 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-09 01:13:04.982131 | orchestrator | Monday 09 March 2026 01:11:00 +0000 (0:00:11.592) 0:01:39.307 ********** 2026-03-09 01:13:04.982138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:13:04.982145 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:13:04.982152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:13:04.982159 | orchestrator | 2026-03-09 01:13:04.982166 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-09 01:13:04.982174 | orchestrator | Monday 09 March 2026 01:11:04 +0000 (0:00:04.547) 0:01:43.854 ********** 2026-03-09 01:13:04.982212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982434 | orchestrator | 2026-03-09 01:13:04.982448 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-09 01:13:04.982479 | orchestrator | Monday 09 March 2026 01:11:09 +0000 (0:00:04.383) 0:01:48.238 ********** 2026-03-09 01:13:04.982503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.982738 | orchestrator | 2026-03-09 01:13:04.982746 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:13:04.982753 | orchestrator | Monday 09 March 2026 01:11:12 +0000 (0:00:03.785) 0:01:52.024 ********** 2026-03-09 01:13:04.982761 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.982769 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.982776 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:04.982783 | orchestrator | 2026-03-09 01:13:04.982790 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-09 01:13:04.982803 | orchestrator | Monday 09 March 2026 01:11:13 +0000 (0:00:00.918) 0:01:52.942 ********** 2026-03-09 01:13:04.982816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.982832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 
01:13:04.982890 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.982911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.982925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:13:04.982933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.982975 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:04.982995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:13:04.983009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 
01:13:04.983066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.983081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.983098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.983111 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:13:04.983134 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.983147 | orchestrator | 2026-03-09 01:13:04.983160 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-09 01:13:04.983173 | orchestrator | Monday 09 March 2026 01:11:14 +0000 (0:00:01.104) 0:01:54.047 ********** 2026-03-09 01:13:04.983195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.983207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.983216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:13:04.983228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:13:04.983385 | orchestrator | 2026-03-09 01:13:04.983393 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:13:04.983400 | orchestrator | Monday 09 March 2026 01:11:21 +0000 (0:00:06.827) 0:02:00.879 ********** 2026-03-09 01:13:04.983408 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:04.983416 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:04.983423 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 01:13:04.983430 | orchestrator | 2026-03-09 01:13:04.983438 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-09 01:13:04.983445 | orchestrator | Monday 09 March 2026 01:11:22 +0000 (0:00:01.089) 0:02:01.968 ********** 2026-03-09 01:13:04.983453 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-09 01:13:04.983490 | orchestrator | 2026-03-09 01:13:04.983500 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-09 01:13:04.983508 | orchestrator | Monday 09 March 2026 01:11:25 +0000 (0:00:02.759) 0:02:04.728 ********** 2026-03-09 01:13:04.983516 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:13:04.983524 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-09 01:13:04.983531 | orchestrator | 2026-03-09 01:13:04.983539 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-09 01:13:04.983553 | orchestrator | Monday 09 March 2026 01:11:28 +0000 (0:00:02.593) 0:02:07.321 ********** 2026-03-09 01:13:04.983562 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983575 | orchestrator | 2026-03-09 01:13:04.983587 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:13:04.983600 | orchestrator | Monday 09 March 2026 01:11:44 +0000 (0:00:16.741) 0:02:24.063 ********** 2026-03-09 01:13:04.983612 | orchestrator | 2026-03-09 01:13:04.983623 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:13:04.983636 | orchestrator | Monday 09 March 2026 01:11:45 +0000 (0:00:00.071) 0:02:24.134 ********** 2026-03-09 01:13:04.983649 | orchestrator | 2026-03-09 01:13:04.983661 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 
01:13:04.983672 | orchestrator | Monday 09 March 2026 01:11:45 +0000 (0:00:00.095) 0:02:24.230 ********** 2026-03-09 01:13:04.983684 | orchestrator | 2026-03-09 01:13:04.983697 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-09 01:13:04.983709 | orchestrator | Monday 09 March 2026 01:11:45 +0000 (0:00:00.160) 0:02:24.391 ********** 2026-03-09 01:13:04.983721 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983734 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:13:04.983745 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.983757 | orchestrator | 2026-03-09 01:13:04.983765 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-09 01:13:04.983772 | orchestrator | Monday 09 March 2026 01:12:02 +0000 (0:00:17.655) 0:02:42.046 ********** 2026-03-09 01:13:04.983780 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983787 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.983802 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:13:04.983813 | orchestrator | 2026-03-09 01:13:04.983826 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-09 01:13:04.983844 | orchestrator | Monday 09 March 2026 01:12:13 +0000 (0:00:11.011) 0:02:53.057 ********** 2026-03-09 01:13:04.983858 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983870 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:13:04.983881 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.983892 | orchestrator | 2026-03-09 01:13:04.983904 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-09 01:13:04.983917 | orchestrator | Monday 09 March 2026 01:12:25 +0000 (0:00:11.845) 0:03:04.903 ********** 2026-03-09 01:13:04.983930 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983943 | orchestrator | 
changed: [testbed-node-2] 2026-03-09 01:13:04.983955 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.983965 | orchestrator | 2026-03-09 01:13:04.983972 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-09 01:13:04.983980 | orchestrator | Monday 09 March 2026 01:12:34 +0000 (0:00:08.765) 0:03:13.669 ********** 2026-03-09 01:13:04.983987 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.983994 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.984002 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:13:04.984009 | orchestrator | 2026-03-09 01:13:04.984016 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-09 01:13:04.984024 | orchestrator | Monday 09 March 2026 01:12:47 +0000 (0:00:12.580) 0:03:26.249 ********** 2026-03-09 01:13:04.984036 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.984044 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:13:04.984051 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:13:04.984058 | orchestrator | 2026-03-09 01:13:04.984065 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-09 01:13:04.984073 | orchestrator | Monday 09 March 2026 01:12:55 +0000 (0:00:08.202) 0:03:34.452 ********** 2026-03-09 01:13:04.984080 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:13:04.984087 | orchestrator | 2026-03-09 01:13:04.984095 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:13:04.984103 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:13:04.984112 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:13:04.984119 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-03-09 01:13:04.984127 | orchestrator | 2026-03-09 01:13:04.984134 | orchestrator | 2026-03-09 01:13:04.984141 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:13:04.984149 | orchestrator | Monday 09 March 2026 01:13:03 +0000 (0:00:08.427) 0:03:42.880 ********** 2026-03-09 01:13:04.984160 | orchestrator | =============================================================================== 2026-03-09 01:13:04.984177 | orchestrator | designate : Copying over designate.conf -------------------------------- 34.83s 2026-03-09 01:13:04.984192 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 17.66s 2026-03-09 01:13:04.984205 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.74s 2026-03-09 01:13:04.984217 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.58s 2026-03-09 01:13:04.984227 | orchestrator | designate : Restart designate-central container ------------------------ 11.85s 2026-03-09 01:13:04.984238 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 11.59s 2026-03-09 01:13:04.984250 | orchestrator | designate : Restart designate-api container ---------------------------- 11.01s 2026-03-09 01:13:04.984261 | orchestrator | designate : Restart designate-producer container ------------------------ 8.77s 2026-03-09 01:13:04.984283 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.43s 2026-03-09 01:13:04.984295 | orchestrator | designate : Restart designate-worker container -------------------------- 8.20s 2026-03-09 01:13:04.984318 | orchestrator | designate : Copying over config.json files for services ----------------- 7.68s 2026-03-09 01:13:04.984332 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.36s 2026-03-09 01:13:04.984344 | 
orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.27s 2026-03-09 01:13:04.984357 | orchestrator | designate : Check designate containers ---------------------------------- 6.83s 2026-03-09 01:13:04.984369 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.99s 2026-03-09 01:13:04.984381 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.55s 2026-03-09 01:13:04.984392 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.38s 2026-03-09 01:13:04.984399 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.19s 2026-03-09 01:13:04.984406 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.97s 2026-03-09 01:13:04.984414 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.80s 2026-03-09 01:13:04.984421 | orchestrator | 2026-03-09 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:13:08.019013 | orchestrator | 2026-03-09 01:13:08 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED 2026-03-09 01:13:08.031925 | orchestrator | 2026-03-09 01:13:08 | INFO  | Task 67960ad4-6c13-48bc-9af3-64b40bfb5238 is in state STARTED 2026-03-09 01:13:08.040691 | orchestrator | 2026-03-09 01:13:08 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:13:08.041074 | orchestrator | 2026-03-09 01:13:08 | INFO  | Task 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 is in state STARTED 2026-03-09 01:13:08.041383 | orchestrator | 2026-03-09 01:13:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:13:11.075438 | orchestrator | 2026-03-09 01:13:11 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED 2026-03-09 01:13:11.077168 | orchestrator | 2026-03-09 01:13:11 | INFO  | Task 67960ad4-6c13-48bc-9af3-64b40bfb5238 is in state 
STARTED
2026-03-09 01:13:11.077221 | orchestrator | 2026-03-09 01:13:11 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:13:11.079112 | orchestrator | 2026-03-09 01:13:11 | INFO  | Task 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 is in state STARTED
2026-03-09 01:13:11.079147 | orchestrator | 2026-03-09 01:13:11 | INFO  | Wait 1 second(s) until the next check
[identical state checks repeated every 3 seconds from 01:13:14 to 01:13:26; tasks 9a9251f3-5c99-4a0f-8d68-85c00815a059, 67960ad4-6c13-48bc-9af3-64b40bfb5238, 60f2188d-b5cb-42ed-b53f-f170e3d6524a and 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 remained in state STARTED]
2026-03-09 01:13:29.323581 | orchestrator | 2026-03-09 01:13:29 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:13:29.323909 | orchestrator | 2026-03-09 01:13:29 | INFO  | Task 67960ad4-6c13-48bc-9af3-64b40bfb5238 is in state SUCCESS
2026-03-09 01:13:29.325193 | orchestrator | 2026-03-09 01:13:29 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:13:29.326008 | orchestrator | 2026-03-09 01:13:29 | INFO  | Task 30816e17-403b-4b38-a5ca-4f03df5ca3ba is in state STARTED
2026-03-09 01:13:29.327704 | orchestrator | 2026-03-09 01:13:29 | INFO  | Task 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 is in state STARTED
2026-03-09 01:13:29.327724 | orchestrator | 2026-03-09 01:13:29 | INFO  | Wait 1 second(s) until the next check
[identical state checks repeated every 3 seconds from 01:13:32 to 01:14:21; tasks 9a9251f3-5c99-4a0f-8d68-85c00815a059, 60f2188d-b5cb-42ed-b53f-f170e3d6524a, 30816e17-403b-4b38-a5ca-4f03df5ca3ba and 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 remained in state STARTED]
2026-03-09 01:14:24.119230 | orchestrator | 2026-03-09 01:14:24 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state STARTED
2026-03-09 01:16:24.228030 | orchestrator | 2026-03-09 01:16:24 | INFO  | Task
60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED
2026-03-09 01:16:24.228127 | orchestrator | 2026-03-09 01:16:24 | INFO  | Task 30816e17-403b-4b38-a5ca-4f03df5ca3ba is in state SUCCESS
2026-03-09 01:16:24.232260 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-09 01:16:24.232271 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-09 01:16:24.232275 | orchestrator | Monday 09 March 2026 01:11:58 +0000 (0:00:00.281)       0:00:00.281 **********
2026-03-09 01:16:24.232299 | orchestrator | changed: [localhost]
2026-03-09 01:16:24.232309 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-09 01:16:24.232313 | orchestrator | Monday 09 March 2026 01:12:00 +0000 (0:00:01.130)       0:00:01.412 **********
2026-03-09 01:16:24.232317 | orchestrator | changed: [localhost]
2026-03-09 01:16:24.232325 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-09 01:16:24.232328 | orchestrator | Monday 09 March 2026 01:12:34 +0000 (0:00:34.025)       0:00:35.438 **********
2026-03-09 01:16:24.232333 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-09 01:16:24.232337 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
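The "FAILED - RETRYING" lines above come from Ansible's task retry mechanism (the download task was configured with three attempts and succeeded on the last one). A minimal sketch of the same retry pattern, where `fetch` stands in for any failing operation (the function name and parameters are illustrative, not part of the job's playbooks):

```python
import time

def run_with_retries(fetch, retries=3, delay=0):
    """Run `fetch` (any callable that raises OSError on failure),
    retrying up to `retries` attempts with `delay` seconds between
    them -- the pattern Ansible applies to a task that sets
    `retries`/`delay`/`until`."""
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except OSError as exc:
            # Mirrors the "(N retries left)" messages in the log above.
            print(f"FAILED - RETRYING ({retries - attempt} retries left): {exc}")
            if attempt < retries:
                time.sleep(delay)
    raise OSError("all retries exhausted")
```

With three attempts allowed, a transient failure on the first two calls still lets the third succeed, which is exactly the sequence recorded in the log.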
2026-03-09 01:16:24.232341 | orchestrator | changed: [localhost]
2026-03-09 01:16:24.232357 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:16:24.232364 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:16:24.232370 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:50.343)       0:01:25.781 **********
2026-03-09 01:16:24.232375 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:24.232381 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:16:24.232389 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:16:24.232419 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:16:24.232426 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:00.644)       0:01:26.426 **********
2026-03-09 01:16:24.232433 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-09 01:16:24.232439 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-09 01:16:24.232446 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-09 01:16:24.232452 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-09 01:16:24.232464 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-09 01:16:24.232472 | orchestrator | skipping: no hosts matched
2026-03-09 01:16:24.232487 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:16:24.232494 | orchestrator | localhost      : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:24.232504 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:24.232514 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:24.232519 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:16:24.232539 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:16:24.232545 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:01.692)       0:01:28.118 **********
2026-03-09 01:16:24.232551 | orchestrator | ===============================================================================
2026-03-09 01:16:24.232557 | orchestrator | Download ironic-agent kernel ------------------------------------------- 50.34s
2026-03-09 01:16:24.232563 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 34.03s
2026-03-09 01:16:24.232569 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.69s
2026-03-09 01:16:24.232680 | orchestrator | Ensure the destination directory exists --------------------------------- 1.13s
2026-03-09 01:16:24.232688 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s
2026-03-09 01:16:24.232718 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:16:24.232730 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:16:24.232736 | orchestrator | Monday 09 March 2026 01:13:35 +0000 (0:00:00.448)       0:00:00.448 **********
2026-03-09 01:16:24.232742 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:16:24.232749 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:16:24.232755 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:16:24.232768 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:16:24.232775 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.559)       0:00:01.007 **********
2026-03-09 01:16:24.232782 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-09 01:16:24.232789 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-09 01:16:24.232796 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-09 01:16:24.232808 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-09 01:16:24.232818 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-09 01:16:24.232861 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.530)       0:00:01.537 **********
2026-03-09 01:16:24.232866 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:16:24.232890 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-09 01:16:24.232895 | orchestrator | Monday 09 March 2026 01:13:37 +0000 (0:00:00.635)       0:00:02.173 **********
2026-03-09 01:16:24.232900 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-09 01:16:24.232910 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-09 01:16:24.232914 | orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:04.041)       0:00:06.215 **********
2026-03-09 01:16:24.232924 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-09 01:16:24.232928 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-09 01:16:24.232937 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-09 01:16:24.232941 | orchestrator | Monday 09 March 2026 01:13:49 +0000 (0:00:08.168)       0:00:14.383 **********
2026-03-09 01:16:24.232942 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:16:24.232951 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-09 01:16:24.232955 | orchestrator | Monday 09 March 2026 01:13:53 +0000 (0:00:04.505)       0:00:18.889 **********
2026-03-09 01:16:24.232960 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:16:24.232965 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-09 01:16:24.232973 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-09 01:16:24.232977 | orchestrator | Monday 09 March 2026 01:13:58 +0000 (0:00:05.021)       0:00:23.910 **********
2026-03-09 01:16:24.232981 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:16:24.232990 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-09 01:16:24.232994 | orchestrator | Monday 09 March 2026 01:14:03 +0000 (0:00:04.264)       0:00:28.175 **********
2026-03-09 01:16:24.232999 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-09 01:16:24.233013 | orchestrator | TASK [magnum : Creating Magnum trustee
domain] *********************************
2026-03-09 01:16:24.233017 | orchestrator | Monday 09 March 2026 01:14:07 +0000 (0:00:04.170)       0:00:32.345 **********
2026-03-09 01:16:24.233022 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:24.233031 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-09 01:16:24.233035 | orchestrator | Monday 09 March 2026 01:14:10 +0000 (0:00:03.229)       0:00:35.575 **********
2026-03-09 01:16:24.233040 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:24.233052 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-09 01:16:24.233057 | orchestrator | Monday 09 March 2026 01:14:14 +0000 (0:00:03.659)       0:00:39.235 **********
2026-03-09 01:16:24.233062 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:16:24.233077 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-09 01:16:24.233084 | orchestrator | Monday 09 March 2026 01:14:17 +0000 (0:00:03.384)       0:00:42.620 **********
2026-03-09 01:16:24.233094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511',
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:16:24.233157 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-09 01:16:24.233163 | orchestrator | Monday 09 March 2026 01:14:19 +0000 (0:00:01.531)       0:00:44.151 **********
2026-03-09 01:16:24.233169 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:24.233181 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-09 01:16:24.233188 | orchestrator | Monday 09 March 2026 01:14:19 +0000 (0:00:00.133)       0:00:44.285 **********
2026-03-09 01:16:24.233193 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:16:24.233199 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:16:24.233205 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:16:24.233216 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-09 01:16:24.233223 | orchestrator | Monday 09 March 2026 01:14:19 +0000 (0:00:00.644)       0:00:44.929 **********
2026-03-09 01:16:24.233229 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:16:24.233241 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-09 01:16:24.233247 | orchestrator | Monday 09 March 2026 01:14:20 +0000 (0:00:00.970)       0:00:45.899 **********
2026-03-09 01:16:24.233257 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233298 | orchestrator | 2026-03-09 01:16:24.233306 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-09 01:16:24.233315 | orchestrator | Monday 09 March 2026 01:14:23 +0000 (0:00:02.668) 0:00:48.567 ********** 2026-03-09 01:16:24.233322 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:24.233328 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:24.233333 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:24.233339 | orchestrator | 2026-03-09 01:16:24.233345 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:16:24.233351 | orchestrator | Monday 09 March 2026 01:14:23 +0000 (0:00:00.327) 0:00:48.894 ********** 2026-03-09 01:16:24.233357 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:16:24.233364 | orchestrator | 2026-03-09 01:16:24.233370 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-09 01:16:24.233425 | orchestrator | Monday 09 March 2026 01:14:24 +0000 (0:00:00.838) 0:00:49.733 ********** 2026-03-09 01:16:24.233434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': 
'30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233491 | orchestrator | 2026-03-09 01:16:24.233497 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-09 01:16:24.233504 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:02.745) 0:00:52.478 ********** 2026-03-09 01:16:24.233512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233548 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.233553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233561 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.233565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233577 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.233581 | orchestrator | 2026-03-09 01:16:24.233585 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-09 01:16:24.233589 | orchestrator | Monday 09 March 2026 01:14:28 +0000 (0:00:00.722) 0:00:53.200 ********** 2026-03-09 01:16:24.233597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233605 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.233609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233617 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.233888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.233906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.233910 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.233914 | orchestrator | 2026-03-09 01:16:24.233918 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-09 01:16:24.233922 | orchestrator | Monday 09 March 2026 01:14:29 +0000 (0:00:01.380) 0:00:54.581 ********** 2026-03-09 01:16:24.233926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233961 | orchestrator | 2026-03-09 01:16:24.233965 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-09 01:16:24.233968 | orchestrator | Monday 09 March 2026 01:14:32 +0000 (0:00:02.515) 0:00:57.096 ********** 2026-03-09 01:16:24.233973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.233990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.233998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.234005 | orchestrator | 2026-03-09 01:16:24.234009 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-09 01:16:24.234041 | orchestrator | Monday 09 March 2026 01:14:37 +0000 (0:00:05.810) 0:01:02.907 ********** 2026-03-09 01:16:24.234047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.234055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.234059 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.234067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 
01:16:24.234071 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.234083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:16:24.234087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:16:24.234093 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.234097 | orchestrator | 2026-03-09 01:16:24.234101 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2026-03-09 01:16:24.234105 | orchestrator | Monday 09 March 2026 01:14:38 +0000 (0:00:00.765) 0:01:03.672 ********** 2026-03-09 01:16:24.234109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.234113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.234117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:16:24.234125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.234133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.234137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:16:24.234141 | orchestrator | 2026-03-09 01:16:24.234145 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:16:24.234149 | orchestrator | Monday 09 March 2026 01:14:41 +0000 (0:00:02.411) 0:01:06.084 ********** 2026-03-09 01:16:24.234152 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234156 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.234160 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.234164 | orchestrator | 2026-03-09 01:16:24.234167 | orchestrator | TASK [magnum : Creating Magnum database] 
*************************************** 2026-03-09 01:16:24.234171 | orchestrator | Monday 09 March 2026 01:14:41 +0000 (0:00:00.289) 0:01:06.373 ********** 2026-03-09 01:16:24.234175 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.234179 | orchestrator | 2026-03-09 01:16:24.234182 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-09 01:16:24.234186 | orchestrator | Monday 09 March 2026 01:14:43 +0000 (0:00:02.188) 0:01:08.562 ********** 2026-03-09 01:16:24.234194 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.234198 | orchestrator | 2026-03-09 01:16:24.234202 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-09 01:16:24.234206 | orchestrator | Monday 09 March 2026 01:14:46 +0000 (0:00:02.441) 0:01:11.003 ********** 2026-03-09 01:16:24.234210 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.234214 | orchestrator | 2026-03-09 01:16:24.234217 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:16:24.234221 | orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:17.409) 0:01:28.413 ********** 2026-03-09 01:16:24.234225 | orchestrator | 2026-03-09 01:16:24.234229 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:16:24.234233 | orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:00.119) 0:01:28.532 ********** 2026-03-09 01:16:24.234236 | orchestrator | 2026-03-09 01:16:24.234242 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:16:24.234247 | orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:00.083) 0:01:28.615 ********** 2026-03-09 01:16:24.234253 | orchestrator | 2026-03-09 01:16:24.234259 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-09 01:16:24.234264 | 
orchestrator | Monday 09 March 2026 01:15:03 +0000 (0:00:00.081) 0:01:28.696 ********** 2026-03-09 01:16:24.234270 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.234276 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:24.234281 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:24.234287 | orchestrator | 2026-03-09 01:16:24.234292 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-09 01:16:24.234295 | orchestrator | Monday 09 March 2026 01:15:17 +0000 (0:00:13.774) 0:01:42.471 ********** 2026-03-09 01:16:24.234316 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.234320 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:24.234324 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:24.234327 | orchestrator | 2026-03-09 01:16:24.234331 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:16:24.234336 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:16:24.234341 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:16:24.234345 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:16:24.234349 | orchestrator | 2026-03-09 01:16:24.234352 | orchestrator | 2026-03-09 01:16:24.234356 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:16:24.234360 | orchestrator | Monday 09 March 2026 01:15:27 +0000 (0:00:10.323) 0:01:52.794 ********** 2026-03-09 01:16:24.234364 | orchestrator | =============================================================================== 2026-03-09 01:16:24.234368 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.41s 2026-03-09 01:16:24.234372 | orchestrator | magnum : 
Restart magnum-api container ---------------------------------- 13.77s 2026-03-09 01:16:24.234376 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.32s 2026-03-09 01:16:24.234382 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 8.17s 2026-03-09 01:16:24.234386 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.81s 2026-03-09 01:16:24.234390 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 5.02s 2026-03-09 01:16:24.234394 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.51s 2026-03-09 01:16:24.234398 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.26s 2026-03-09 01:16:24.234401 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.17s 2026-03-09 01:16:24.234471 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.04s 2026-03-09 01:16:24.234476 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.66s 2026-03-09 01:16:24.234480 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.38s 2026-03-09 01:16:24.234484 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.23s 2026-03-09 01:16:24.234488 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.75s 2026-03-09 01:16:24.234492 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.67s 2026-03-09 01:16:24.234495 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.52s 2026-03-09 01:16:24.234499 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.44s 2026-03-09 01:16:24.234503 | orchestrator | magnum : Check magnum 
containers ---------------------------------------- 2.41s 2026-03-09 01:16:24.234507 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-03-09 01:16:24.234512 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.53s 2026-03-09 01:16:24.234517 | orchestrator | 2026-03-09 01:16:24.234521 | orchestrator | 2026-03-09 01:16:24.234525 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:16:24.234530 | orchestrator | 2026-03-09 01:16:24.234534 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:16:24.234539 | orchestrator | Monday 09 March 2026 01:13:11 +0000 (0:00:00.276) 0:00:00.276 ********** 2026-03-09 01:16:24.234544 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:24.234549 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:24.234554 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:24.234558 | orchestrator | 2026-03-09 01:16:24.234562 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:16:24.234567 | orchestrator | Monday 09 March 2026 01:13:11 +0000 (0:00:00.322) 0:00:00.598 ********** 2026-03-09 01:16:24.234572 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-09 01:16:24.234576 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-09 01:16:24.234581 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-09 01:16:24.234585 | orchestrator | 2026-03-09 01:16:24.234590 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-09 01:16:24.234594 | orchestrator | 2026-03-09 01:16:24.234599 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:16:24.234603 | orchestrator | Monday 09 March 2026 01:13:11 +0000 
(0:00:00.510) 0:00:01.109 ********** 2026-03-09 01:16:24.234608 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:16:24.234613 | orchestrator | 2026-03-09 01:16:24.234618 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-09 01:16:24.234623 | orchestrator | Monday 09 March 2026 01:13:12 +0000 (0:00:00.708) 0:00:01.817 ********** 2026-03-09 01:16:24.234628 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-09 01:16:24.234632 | orchestrator | 2026-03-09 01:16:24.234637 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-09 01:16:24.234641 | orchestrator | Monday 09 March 2026 01:13:16 +0000 (0:00:03.952) 0:00:05.769 ********** 2026-03-09 01:16:24.234645 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-09 01:16:24.234650 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-09 01:16:24.234654 | orchestrator | 2026-03-09 01:16:24.234658 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-09 01:16:24.234662 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:07.680) 0:00:13.450 ********** 2026-03-09 01:16:24.234666 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:16:24.234673 | orchestrator | 2026-03-09 01:16:24.234677 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-09 01:16:24.234681 | orchestrator | Monday 09 March 2026 01:13:28 +0000 (0:00:03.921) 0:00:17.372 ********** 2026-03-09 01:16:24.234685 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:16:24.234689 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 
2026-03-09 01:16:24.234693 | orchestrator | 2026-03-09 01:16:24.234696 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-09 01:16:24.234700 | orchestrator | Monday 09 March 2026 01:13:32 +0000 (0:00:04.625) 0:00:21.997 ********** 2026-03-09 01:16:24.234704 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:16:24.234708 | orchestrator | 2026-03-09 01:16:24.234711 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-09 01:16:24.234715 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:03.893) 0:00:25.891 ********** 2026-03-09 01:16:24.234719 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-09 01:16:24.234722 | orchestrator | 2026-03-09 01:16:24.234726 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:16:24.234733 | orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:04.353) 0:00:30.244 ********** 2026-03-09 01:16:24.234737 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234741 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.234745 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.234749 | orchestrator | 2026-03-09 01:16:24.234754 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-09 01:16:24.234760 | orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:00.439) 0:00:30.683 ********** 2026-03-09 01:16:24.234766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234796 | orchestrator | 2026-03-09 01:16:24.234802 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-09 01:16:24.234808 | orchestrator | Monday 09 March 2026 01:13:42 +0000 (0:00:00.998) 0:00:31.682 ********** 2026-03-09 01:16:24.234814 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234820 | orchestrator | 2026-03-09 01:16:24.234826 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-09 01:16:24.234833 | orchestrator | Monday 09 March 2026 01:13:42 +0000 (0:00:00.137) 0:00:31.820 ********** 2026-03-09 01:16:24.234839 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234845 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.234851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.234857 | orchestrator | 2026-03-09 01:16:24.234863 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:16:24.234869 | orchestrator | Monday 09 March 2026 01:13:43 +0000 (0:00:00.573) 0:00:32.394 ********** 2026-03-09 01:16:24.234875 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:16:24.234881 | orchestrator | 2026-03-09 01:16:24.234887 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-09 01:16:24.234894 | orchestrator | Monday 09 March 2026 01:13:43 +0000 (0:00:00.601) 0:00:32.996 ********** 2026-03-09 
01:16:24.234906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.234932 | orchestrator | 2026-03-09 01:16:24.234939 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-09 01:16:24.234945 | orchestrator | Monday 09 March 2026 01:13:45 +0000 (0:00:01.627) 0:00:34.623 ********** 2026-03-09 01:16:24.234952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.234958 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.234971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.234978 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.234985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.234991 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.234998 | orchestrator | 2026-03-09 01:16:24.235009 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-09 01:16:24.235016 | orchestrator | Monday 09 March 2026 01:13:46 +0000 (0:00:01.017) 0:00:35.640 ********** 2026-03-09 01:16:24.235022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235028 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.235034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235041 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.235053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235060 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.235067 | orchestrator | 2026-03-09 01:16:24.235074 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-09 01:16:24.235081 | orchestrator | Monday 09 March 2026 01:13:47 +0000 (0:00:00.932) 0:00:36.573 ********** 2026-03-09 01:16:24.235088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235114 | orchestrator | 2026-03-09 01:16:24.235121 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-09 01:16:24.235125 | orchestrator | Monday 09 March 2026 01:13:49 +0000 (0:00:01.882) 0:00:38.455 ********** 2026-03-09 01:16:24.235135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235139 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235151 | orchestrator | 2026-03-09 01:16:24.235155 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-03-09 01:16:24.235159 | orchestrator | Monday 09 March 2026 01:13:52 +0000 (0:00:03.211) 0:00:41.667 ********** 2026-03-09 01:16:24.235162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:16:24.235166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:16:24.235170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:16:24.235174 | orchestrator | 2026-03-09 01:16:24.235178 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-09 01:16:24.235182 | orchestrator | Monday 09 March 2026 01:13:54 +0000 (0:00:01.843) 0:00:43.510 ********** 2026-03-09 01:16:24.235185 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.235189 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:24.235193 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:24.235197 | orchestrator | 2026-03-09 01:16:24.235201 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-09 01:16:24.235205 | orchestrator | Monday 09 March 2026 01:13:56 +0000 (0:00:01.753) 0:00:45.264 ********** 2026-03-09 01:16:24.235209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235215 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:24.235219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235226 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:24.235230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:16:24.235234 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:24.235238 | orchestrator | 2026-03-09 01:16:24.235242 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-09 01:16:24.235246 | orchestrator | Monday 09 March 2026 01:13:56 +0000 (0:00:00.619) 0:00:45.883 ********** 2026-03-09 01:16:24.235250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:16:24.235269 | orchestrator | 2026-03-09 01:16:24.235273 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-09 01:16:24.235277 | orchestrator | Monday 09 March 2026 01:13:58 +0000 (0:00:01.884) 0:00:47.767 ********** 2026-03-09 01:16:24.235280 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.235284 | orchestrator | 2026-03-09 01:16:24.235288 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-09 01:16:24.235292 | orchestrator | Monday 09 March 2026 01:14:02 +0000 (0:00:03.505) 0:00:51.273 ********** 2026-03-09 01:16:24.235296 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.235299 | orchestrator | 2026-03-09 01:16:24.235303 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-09 01:16:24.235307 | orchestrator | Monday 09 March 2026 01:14:05 +0000 (0:00:03.236) 0:00:54.510 ********** 2026-03-09 01:16:24.235311 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.235314 | orchestrator | 2026-03-09 01:16:24.235318 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:16:24.235322 | orchestrator | Monday 09 March 2026 01:14:18 +0000 (0:00:13.146) 0:01:07.656 ********** 2026-03-09 01:16:24.235325 | orchestrator | 2026-03-09 01:16:24.235329 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:16:24.235333 | orchestrator | Monday 09 March 2026 01:14:18 +0000 (0:00:00.076) 0:01:07.733 ********** 2026-03-09 01:16:24.235337 | orchestrator | 2026-03-09 01:16:24.235340 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-09 01:16:24.235344 | orchestrator | Monday 09 March 2026 01:14:18 +0000 (0:00:00.067) 0:01:07.801 ********** 2026-03-09 01:16:24.235348 | orchestrator | 2026-03-09 01:16:24.235352 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-09 01:16:24.235355 | orchestrator | Monday 09 March 2026 01:14:18 +0000 (0:00:00.082) 0:01:07.884 ********** 2026-03-09 01:16:24.235359 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:24.235363 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:24.235366 | orchestrator | changed: [testbed-node-2] 2026-03-09 
01:16:24.235370 | orchestrator | 2026-03-09 01:16:24.235374 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:16:24.235378 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:16:24.235382 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:16:24.235386 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:16:24.235389 | orchestrator | 2026-03-09 01:16:24.235393 | orchestrator | 2026-03-09 01:16:24.235397 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:16:24.235401 | orchestrator | Monday 09 March 2026 01:14:29 +0000 (0:00:10.659) 0:01:18.543 ********** 2026-03-09 01:16:24.235450 | orchestrator | =============================================================================== 2026-03-09 01:16:24.235459 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.15s 2026-03-09 01:16:24.235466 | orchestrator | placement : Restart placement-api container ---------------------------- 10.66s 2026-03-09 01:16:24.235472 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.68s 2026-03-09 01:16:24.235485 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.63s 2026-03-09 01:16:24.235491 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.35s 2026-03-09 01:16:24.235498 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.95s 2026-03-09 01:16:24.235504 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.92s 2026-03-09 01:16:24.235511 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.89s 2026-03-09 01:16:24.235518 | orchestrator | placement : Creating placement databases -------------------------------- 3.51s 2026-03-09 01:16:24.235524 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.24s 2026-03-09 01:16:24.235531 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.21s 2026-03-09 01:16:24.235538 | orchestrator | placement : Check placement containers ---------------------------------- 1.88s 2026-03-09 01:16:24.235544 | orchestrator | placement : Copying over config.json files for services ----------------- 1.88s 2026-03-09 01:16:24.235553 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.84s 2026-03-09 01:16:24.235565 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.75s 2026-03-09 01:16:24.235571 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.63s 2026-03-09 01:16:24.235577 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.02s 2026-03-09 01:16:24.235585 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.00s 2026-03-09 01:16:24.235591 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.93s 2026-03-09 01:16:24.235600 | orchestrator | placement : include_tasks ----------------------------------------------- 0.71s 2026-03-09 01:16:24.235607 | orchestrator | 2026-03-09 01:16:24 | INFO  | Task 1361ab05-c729-45e8-a2fa-8cfe56ae7ce0 is in state SUCCESS 2026-03-09 01:16:24.235614 | orchestrator | 2026-03-09 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:27.268576 | orchestrator | 2026-03-09 01:16:27.268732 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 9a9251f3-5c99-4a0f-8d68-85c00815a059 is in state SUCCESS 2026-03-09 01:16:27.269778 | orchestrator | 2026-03-09 
01:16:27.269813 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:16:27.269824 | orchestrator | 2026-03-09 01:16:27.269834 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:16:27.269843 | orchestrator | Monday 09 March 2026 01:09:10 +0000 (0:00:00.293) 0:00:00.293 ********** 2026-03-09 01:16:27.269853 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:27.269862 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:27.269869 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:27.269877 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:27.269885 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:27.269892 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:27.269900 | orchestrator | 2026-03-09 01:16:27.269907 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:16:27.269914 | orchestrator | Monday 09 March 2026 01:09:11 +0000 (0:00:00.727) 0:00:01.020 ********** 2026-03-09 01:16:27.269922 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-09 01:16:27.269954 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-09 01:16:27.270113 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-09 01:16:27.270131 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-09 01:16:27.270143 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-09 01:16:27.270153 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-09 01:16:27.270163 | orchestrator | 2026-03-09 01:16:27.270173 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-09 01:16:27.270183 | orchestrator | 2026-03-09 01:16:27.270194 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 
01:16:27.270235 | orchestrator | Monday 09 March 2026 01:09:12 +0000 (0:00:00.675) 0:00:01.696 ********** 2026-03-09 01:16:27.270250 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:16:27.270260 | orchestrator | 2026-03-09 01:16:27.270267 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-09 01:16:27.270292 | orchestrator | Monday 09 March 2026 01:09:13 +0000 (0:00:01.414) 0:00:03.110 ********** 2026-03-09 01:16:27.270299 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:27.270307 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:27.270314 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:27.270321 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:27.270328 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:27.270339 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:27.270351 | orchestrator | 2026-03-09 01:16:27.270391 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-09 01:16:27.270467 | orchestrator | Monday 09 March 2026 01:09:14 +0000 (0:00:01.291) 0:00:04.401 ********** 2026-03-09 01:16:27.270488 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:27.270495 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:27.270503 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:27.270510 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:27.270517 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:27.270525 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:27.270532 | orchestrator | 2026-03-09 01:16:27.270540 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-09 01:16:27.270547 | orchestrator | Monday 09 March 2026 01:09:15 +0000 (0:00:01.109) 0:00:05.511 ********** 2026-03-09 01:16:27.270555 | orchestrator | ok: 
[testbed-node-0] => { 2026-03-09 01:16:27.270564 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270571 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270579 | orchestrator | } 2026-03-09 01:16:27.270587 | orchestrator | ok: [testbed-node-1] => { 2026-03-09 01:16:27.270594 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270601 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270609 | orchestrator | } 2026-03-09 01:16:27.270616 | orchestrator | ok: [testbed-node-2] => { 2026-03-09 01:16:27.270624 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270631 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270638 | orchestrator | } 2026-03-09 01:16:27.270646 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 01:16:27.270653 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270660 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270667 | orchestrator | } 2026-03-09 01:16:27.270675 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 01:16:27.270682 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270689 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270696 | orchestrator | } 2026-03-09 01:16:27.270704 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 01:16:27.270711 | orchestrator |  "changed": false, 2026-03-09 01:16:27.270718 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:16:27.270725 | orchestrator | } 2026-03-09 01:16:27.270732 | orchestrator | 2026-03-09 01:16:27.270740 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-09 01:16:27.270747 | orchestrator | Monday 09 March 2026 01:09:16 +0000 (0:00:00.781) 0:00:06.293 ********** 2026-03-09 01:16:27.270754 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.270761 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.270769 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:16:27.270776 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.270784 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.270791 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.270798 | orchestrator | 2026-03-09 01:16:27.270805 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-09 01:16:27.270828 | orchestrator | Monday 09 March 2026 01:09:17 +0000 (0:00:00.570) 0:00:06.863 ********** 2026-03-09 01:16:27.270835 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-09 01:16:27.270842 | orchestrator | 2026-03-09 01:16:27.270850 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-09 01:16:27.270857 | orchestrator | Monday 09 March 2026 01:09:20 +0000 (0:00:03.332) 0:00:10.195 ********** 2026-03-09 01:16:27.270865 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-09 01:16:27.270873 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-09 01:16:27.270880 | orchestrator | 2026-03-09 01:16:27.270902 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-09 01:16:27.270910 | orchestrator | Monday 09 March 2026 01:09:27 +0000 (0:00:07.125) 0:00:17.320 ********** 2026-03-09 01:16:27.270917 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:16:27.270928 | orchestrator | 2026-03-09 01:16:27.270940 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-09 01:16:27.270952 | orchestrator | Monday 09 March 2026 01:09:31 +0000 (0:00:03.550) 0:00:20.871 ********** 2026-03-09 01:16:27.270963 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:16:27.270974 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service) 2026-03-09 01:16:27.270986 | orchestrator | 2026-03-09 01:16:27.270998 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-09 01:16:27.271008 | orchestrator | Monday 09 March 2026 01:09:35 +0000 (0:00:04.257) 0:00:25.128 ********** 2026-03-09 01:16:27.271020 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:16:27.271032 | orchestrator | 2026-03-09 01:16:27.271044 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-09 01:16:27.271057 | orchestrator | Monday 09 March 2026 01:09:39 +0000 (0:00:03.755) 0:00:28.884 ********** 2026-03-09 01:16:27.271069 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-09 01:16:27.271081 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-09 01:16:27.271091 | orchestrator | 2026-03-09 01:16:27.271099 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:16:27.271106 | orchestrator | Monday 09 March 2026 01:09:47 +0000 (0:00:08.281) 0:00:37.166 ********** 2026-03-09 01:16:27.271113 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.271120 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.271127 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.271134 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.271141 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.271149 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.271156 | orchestrator | 2026-03-09 01:16:27.271163 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-09 01:16:27.271170 | orchestrator | Monday 09 March 2026 01:09:48 +0000 (0:00:00.916) 0:00:38.082 ********** 2026-03-09 01:16:27.271177 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.271184 | orchestrator | 
skipping: [testbed-node-0] 2026-03-09 01:16:27.271192 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.271199 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.271206 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.271213 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.271220 | orchestrator | 2026-03-09 01:16:27.271227 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-09 01:16:27.271235 | orchestrator | Monday 09 March 2026 01:09:51 +0000 (0:00:02.806) 0:00:40.889 ********** 2026-03-09 01:16:27.271242 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:16:27.271249 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:16:27.271256 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:16:27.271263 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:16:27.271279 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:16:27.271286 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:16:27.271293 | orchestrator | 2026-03-09 01:16:27.271300 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-09 01:16:27.271308 | orchestrator | Monday 09 March 2026 01:09:52 +0000 (0:00:01.087) 0:00:41.977 ********** 2026-03-09 01:16:27.271315 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.271322 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.271330 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.271337 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.271344 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.271351 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.271358 | orchestrator | 2026-03-09 01:16:27.271366 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-09 01:16:27.271373 | orchestrator | Monday 09 March 2026 01:09:54 +0000 (0:00:02.407) 0:00:44.384 ********** 2026-03-09 
01:16:27.271383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271496 | orchestrator | 2026-03-09 01:16:27.271503 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-09 01:16:27.271511 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:03.072) 0:00:47.457 ********** 2026-03-09 01:16:27.271518 | orchestrator | [WARNING]: Skipped 2026-03-09 01:16:27.271526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-09 01:16:27.271535 | orchestrator | due to this access issue: 2026-03-09 01:16:27.271542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-09 01:16:27.271549 | orchestrator | a directory 2026-03-09 01:16:27.271557 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:16:27.271564 | orchestrator | 2026-03-09 01:16:27.271576 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:16:27.271584 | orchestrator | Monday 09 March 2026 01:09:58 +0000 (0:00:00.816) 0:00:48.273 ********** 2026-03-09 01:16:27.271592 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:16:27.271600 | orchestrator | 2026-03-09 01:16:27.271608 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-09 01:16:27.271615 | orchestrator | Monday 09 March 2026 01:09:59 +0000 (0:00:01.195) 0:00:49.469 ********** 2026-03-09 01:16:27.271623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.271673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.271687 | orchestrator | 2026-03-09 01:16:27.271694 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-09 01:16:27.271702 | orchestrator | Monday 09 March 2026 01:10:02 +0000 (0:00:03.181) 0:00:52.650 ********** 2026-03-09 01:16:27.271714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271740 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.271752 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.271772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271785 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.271798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271820 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.271834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271846 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.271860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271871 | orchestrator | 
skipping: [testbed-node-5] 2026-03-09 01:16:27.271878 | orchestrator | 2026-03-09 01:16:27.271885 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-09 01:16:27.271893 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:03.737) 0:00:56.387 ********** 2026-03-09 01:16:27.271900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271908 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.271921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271935 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.271943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.271950 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.271958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271965 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.271973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271980 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.271988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.271996 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.272003 | 
orchestrator | 2026-03-09 01:16:27.272010 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-09 01:16:27.272027 | orchestrator | Monday 09 March 2026 01:10:10 +0000 (0:00:03.438) 0:00:59.825 ********** 2026-03-09 01:16:27.272034 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.272041 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.272048 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.272056 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.272063 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.272070 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.272077 | orchestrator | 2026-03-09 01:16:27.272084 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-09 01:16:27.272091 | orchestrator | Monday 09 March 2026 01:10:13 +0000 (0:00:03.644) 0:01:03.470 ********** 2026-03-09 01:16:27.272098 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.272120 | orchestrator | 2026-03-09 01:16:27.272127 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-09 01:16:27.272134 | orchestrator | Monday 09 March 2026 01:10:14 +0000 (0:00:00.238) 0:01:03.708 ********** 2026-03-09 01:16:27.272142 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.272148 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.272156 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.272163 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.272170 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.272177 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.272184 | orchestrator | 2026-03-09 01:16:27.272191 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-09 01:16:27.272199 | orchestrator | Monday 09 March 2026 01:10:15 
+0000 (0:00:01.816) 0:01:05.525 ********** 2026-03-09 01:16:27.272206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.272214 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.272222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-09 01:16:27.272230 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.272237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272250 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.272595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.272609 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.272617 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272624 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.272632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272640 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.272647 | orchestrator | 2026-03-09 01:16:27.272655 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-09 01:16:27.272662 | orchestrator | Monday 09 March 2026 01:10:21 +0000 (0:00:05.166) 0:01:10.691 ********** 2026-03-09 
01:16:27.272670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.272685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.272707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.272715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272736 | orchestrator | 2026-03-09 01:16:27.272743 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-09 01:16:27.272751 | orchestrator | Monday 09 March 2026 01:10:28 +0000 (0:00:07.792) 0:01:18.483 ********** 2026-03-09 01:16:27.272762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.272770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.272778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-09 01:16:27.272811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.272819 | orchestrator | 2026-03-09 01:16:27.272827 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-09 01:16:27.272834 | orchestrator | Monday 09 March 2026 01:10:41 +0000 (0:00:12.826) 0:01:31.310 ********** 2026-03-09 01:16:27.272842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-09 01:16:27.272849 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.272857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.272865 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.272875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272883 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:16:27.272891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272898 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.272910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.272918 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.272926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.272933 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.272941 | orchestrator | 2026-03-09 01:16:27.272951 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-09 01:16:27.272962 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:06.447) 0:01:37.758 ********** 2026-03-09 01:16:27.272974 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.272986 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.272997 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:27.273016 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:27.273028 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273040 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:27.273051 | orchestrator | 2026-03-09 01:16:27.273063 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-09 01:16:27.273075 | orchestrator | Monday 09 March 2026 01:10:53 +0000 (0:00:05.373) 0:01:43.132 ********** 2026-03-09 01:16:27.273088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.273097 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.273113 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.273135 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.273150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.273163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.273171 | orchestrator | 2026-03-09 01:16:27.273178 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-09 01:16:27.273186 | orchestrator | Monday 09 March 2026 01:11:00 +0000 (0:00:07.355) 0:01:50.487 ********** 2026-03-09 01:16:27.273193 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273201 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273210 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273217 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273226 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273235 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273243 | orchestrator | 2026-03-09 01:16:27.273252 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-09 01:16:27.273260 | orchestrator | Monday 09 March 2026 01:11:04 +0000 (0:00:03.888) 0:01:54.375 ********** 2026-03-09 01:16:27.273269 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273277 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273286 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273294 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273303 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273311 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273319 | orchestrator | 2026-03-09 01:16:27.273328 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-09 01:16:27.273337 | orchestrator | Monday 09 March 2026 01:11:08 +0000 (0:00:03.409) 0:01:57.784 ********** 2026-03-09 01:16:27.273349 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273358 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273366 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273374 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273383 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273392 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273400 | orchestrator | 2026-03-09 01:16:27.273435 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-09 01:16:27.273444 | orchestrator | Monday 09 March 2026 01:11:11 +0000 (0:00:03.264) 0:02:01.049 ********** 2026-03-09 01:16:27.273452 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273461 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273469 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273502 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273511 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:16:27.273519 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273528 | orchestrator | 2026-03-09 01:16:27.273536 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-09 01:16:27.273545 | orchestrator | Monday 09 March 2026 01:11:14 +0000 (0:00:02.633) 0:02:03.683 ********** 2026-03-09 01:16:27.273554 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273570 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273577 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273584 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273591 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273598 | orchestrator | 2026-03-09 01:16:27.273606 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-09 01:16:27.273613 | orchestrator | Monday 09 March 2026 01:11:18 +0000 (0:00:04.581) 0:02:08.265 ********** 2026-03-09 01:16:27.273620 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273628 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273635 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273642 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273649 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273656 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273664 | orchestrator | 2026-03-09 01:16:27.273671 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-09 01:16:27.273678 | orchestrator | Monday 09 March 2026 01:11:23 +0000 (0:00:05.299) 0:02:13.564 ********** 2026-03-09 01:16:27.273685 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273693 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273701 
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273708 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273715 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273723 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273730 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273737 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273745 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273752 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273759 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:16:27.273767 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273774 | orchestrator | 2026-03-09 01:16:27.273785 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-09 01:16:27.273797 | orchestrator | Monday 09 March 2026 01:11:26 +0000 (0:00:02.668) 0:02:16.232 ********** 2026-03-09 01:16:27.273810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.273841 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.273862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.273874 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.273887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.273899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.273911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.273923 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.273934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-09 01:16:27.273945 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.273956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.273976 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.273988 | orchestrator | 2026-03-09 01:16:27.274000 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-09 01:16:27.274046 | orchestrator | Monday 09 March 2026 01:11:28 +0000 (0:00:01.964) 0:02:18.197 ********** 2026-03-09 01:16:27.274072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.274081 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.274097 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.274112 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.274137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.274145 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274152 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 01:16:27.274160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.274167 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274175 | orchestrator | 2026-03-09 01:16:27.274182 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-09 01:16:27.274190 | orchestrator | Monday 09 March 2026 01:11:32 +0000 (0:00:03.519) 0:02:21.716 ********** 2026-03-09 01:16:27.274197 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274204 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274211 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274218 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274225 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274232 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274239 | orchestrator | 2026-03-09 01:16:27.274247 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-09 01:16:27.274254 | orchestrator | Monday 09 March 2026 01:11:36 +0000 (0:00:04.122) 0:02:25.838 ********** 2026-03-09 01:16:27.274261 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274268 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 01:16:27.274275 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274283 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:27.274290 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:27.274297 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:27.274304 | orchestrator | 2026-03-09 01:16:27.274312 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-09 01:16:27.274319 | orchestrator | Monday 09 March 2026 01:11:40 +0000 (0:00:04.784) 0:02:30.623 ********** 2026-03-09 01:16:27.274326 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274333 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274341 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274354 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274361 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274369 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274376 | orchestrator | 2026-03-09 01:16:27.274383 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-09 01:16:27.274391 | orchestrator | Monday 09 March 2026 01:11:44 +0000 (0:00:03.611) 0:02:34.234 ********** 2026-03-09 01:16:27.274398 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274421 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274429 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274436 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274443 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274451 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274458 | orchestrator | 2026-03-09 01:16:27.274465 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-09 01:16:27.274473 | orchestrator | Monday 09 March 2026 01:11:50 +0000 (0:00:06.129) 
0:02:40.364 ********** 2026-03-09 01:16:27.274480 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274487 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274494 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274501 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274509 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274516 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274523 | orchestrator | 2026-03-09 01:16:27.274531 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-09 01:16:27.274621 | orchestrator | Monday 09 March 2026 01:11:54 +0000 (0:00:04.175) 0:02:44.540 ********** 2026-03-09 01:16:27.274631 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274639 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274646 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274653 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274660 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274668 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274675 | orchestrator | 2026-03-09 01:16:27.274682 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-09 01:16:27.274690 | orchestrator | Monday 09 March 2026 01:11:59 +0000 (0:00:04.773) 0:02:49.313 ********** 2026-03-09 01:16:27.274697 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274704 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274712 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274719 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274726 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274734 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274741 | orchestrator | 2026-03-09 01:16:27.274748 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-03-09 01:16:27.274756 | orchestrator | Monday 09 March 2026 01:12:04 +0000 (0:00:05.046) 0:02:54.360 ********** 2026-03-09 01:16:27.274763 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274770 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274777 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274785 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274792 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274799 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274807 | orchestrator | 2026-03-09 01:16:27.274814 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-09 01:16:27.274826 | orchestrator | Monday 09 March 2026 01:12:10 +0000 (0:00:06.184) 0:03:00.544 ********** 2026-03-09 01:16:27.274834 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274841 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274848 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274855 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274862 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.274870 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274884 | orchestrator | 2026-03-09 01:16:27.274891 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-09 01:16:27.274899 | orchestrator | Monday 09 March 2026 01:12:13 +0000 (0:00:03.065) 0:03:03.609 ********** 2026-03-09 01:16:27.274906 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274915 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.274922 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274929 | orchestrator | 
skipping: [testbed-node-3] 2026-03-09 01:16:27.274937 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274944 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.274951 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274958 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274966 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.274973 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.274980 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:16:27.274987 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.274995 | orchestrator | 2026-03-09 01:16:27.275002 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-09 01:16:27.275009 | orchestrator | Monday 09 March 2026 01:12:18 +0000 (0:00:04.968) 0:03:08.577 ********** 2026-03-09 01:16:27.275017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.275025 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.275098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.275106 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.275120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:16:27.275134 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.275141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.275149 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.275157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.275164 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.275172 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:16:27.275180 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.275187 | orchestrator | 2026-03-09 01:16:27.275195 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-09 01:16:27.275202 | orchestrator | Monday 09 March 2026 01:12:23 +0000 (0:00:04.252) 0:03:12.830 ********** 2026-03-09 01:16:27.275210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.275228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.275236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:16:27.275244 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.275253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.275261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:16:27.275273 | orchestrator | 2026-03-09 01:16:27.275281 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:16:27.275288 | orchestrator | Monday 09 March 2026 01:12:28 +0000 (0:00:05.162) 0:03:17.993 ********** 2026-03-09 01:16:27.275295 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:16:27.275303 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:16:27.275310 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:16:27.275317 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:16:27.275324 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:16:27.275336 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:16:27.275343 | orchestrator | 2026-03-09 01:16:27.275351 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-09 01:16:27.275358 | orchestrator | Monday 09 March 2026 01:12:29 +0000 (0:00:01.655) 0:03:19.648 ********** 2026-03-09 01:16:27.275365 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:27.275373 | orchestrator | 2026-03-09 01:16:27.275380 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-09 01:16:27.275387 | orchestrator | Monday 09 March 2026 01:12:32 +0000 (0:00:02.469) 0:03:22.118 ********** 2026-03-09 01:16:27.275395 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:27.275402 | orchestrator | 2026-03-09 01:16:27.275432 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-09 01:16:27.275440 | orchestrator | Monday 09 
March 2026 01:12:34 +0000 (0:00:02.525) 0:03:24.644 ********** 2026-03-09 01:16:27.275447 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:27.275454 | orchestrator | 2026-03-09 01:16:27.275462 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275469 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:48.263) 0:04:12.907 ********** 2026-03-09 01:16:27.275476 | orchestrator | 2026-03-09 01:16:27.275484 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275491 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:00.071) 0:04:12.978 ********** 2026-03-09 01:16:27.275498 | orchestrator | 2026-03-09 01:16:27.275506 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275513 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:00.377) 0:04:13.356 ********** 2026-03-09 01:16:27.275520 | orchestrator | 2026-03-09 01:16:27.275528 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275535 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:00.126) 0:04:13.482 ********** 2026-03-09 01:16:27.275542 | orchestrator | 2026-03-09 01:16:27.275550 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275557 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:00.206) 0:04:13.689 ********** 2026-03-09 01:16:27.275564 | orchestrator | 2026-03-09 01:16:27.275572 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:16:27.275579 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:00.238) 0:04:13.928 ********** 2026-03-09 01:16:27.275586 | orchestrator | 2026-03-09 01:16:27.275594 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-03-09 01:16:27.275601 | orchestrator | Monday 09 March 2026 01:13:24 +0000 (0:00:00.242) 0:04:14.170 ********** 2026-03-09 01:16:27.275608 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:16:27.275616 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:16:27.275623 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:16:27.275631 | orchestrator | 2026-03-09 01:16:27.275638 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-09 01:16:27.275651 | orchestrator | Monday 09 March 2026 01:13:53 +0000 (0:00:28.687) 0:04:42.858 ********** 2026-03-09 01:16:27.275658 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:16:27.275666 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:16:27.275673 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:16:27.275680 | orchestrator | 2026-03-09 01:16:27.275688 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:16:27.275696 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:16:27.275705 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:16:27.275712 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:16:27.275720 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:16:27.275727 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:16:27.275737 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:16:27.275745 | orchestrator | 2026-03-09 01:16:27.275754 | orchestrator | 2026-03-09 01:16:27.275762 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 01:16:27.275771 | orchestrator | Monday 09 March 2026 01:14:44 +0000 (0:00:51.620) 0:05:34.479 ********** 2026-03-09 01:16:27.275779 | orchestrator | =============================================================================== 2026-03-09 01:16:27.275788 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.62s 2026-03-09 01:16:27.275796 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.26s 2026-03-09 01:16:27.275805 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.69s 2026-03-09 01:16:27.275813 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 12.83s 2026-03-09 01:16:27.275822 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.28s 2026-03-09 01:16:27.275830 | orchestrator | neutron : Copying over config.json files for services ------------------- 7.79s 2026-03-09 01:16:27.275839 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 7.36s 2026-03-09 01:16:27.275848 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.13s 2026-03-09 01:16:27.275860 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 6.45s 2026-03-09 01:16:27.275870 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 6.18s 2026-03-09 01:16:27.275878 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 6.13s 2026-03-09 01:16:27.275886 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.37s 2026-03-09 01:16:27.275893 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 5.30s 2026-03-09 01:16:27.275900 | orchestrator | neutron : Copying over existing 
policy file ----------------------------- 5.17s 2026-03-09 01:16:27.275907 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.16s 2026-03-09 01:16:27.275915 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.05s 2026-03-09 01:16:27.275922 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.97s 2026-03-09 01:16:27.275929 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.78s 2026-03-09 01:16:27.275937 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.77s 2026-03-09 01:16:27.275952 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.58s 2026-03-09 01:16:27.275960 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 6f6c3ccd-cda3-46d3-8331-ac199cb157b5 is in state STARTED 2026-03-09 01:16:27.275967 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:16:27.275974 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 19a353bd-2ce2-4f40-b728-300c8191f367 is in state STARTED 2026-03-09 01:16:27.275982 | orchestrator | 2026-03-09 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:30.308157 | orchestrator | 2026-03-09 01:16:30 | INFO  | Task 6f6c3ccd-cda3-46d3-8331-ac199cb157b5 is in state STARTED 2026-03-09 01:16:30.309194 | orchestrator | 2026-03-09 01:16:30 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state STARTED 2026-03-09 01:16:30.311398 | orchestrator | 2026-03-09 01:16:30 | INFO  | Task 19a353bd-2ce2-4f40-b728-300c8191f367 is in state STARTED 2026-03-09 01:16:30.311575 | orchestrator | 2026-03-09 01:16:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:33.352859 | orchestrator | 2026-03-09 01:16:33 | INFO  | Task 6f6c3ccd-cda3-46d3-8331-ac199cb157b5 is in state STARTED 2026-03-09 01:16:33.354172 | 
01:17:06.921640 | orchestrator | 2026-03-09 01:17:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:17:09.973793 | orchestrator | 2026-03-09 01:17:09 | INFO  | Task 6f6c3ccd-cda3-46d3-8331-ac199cb157b5 is in state SUCCESS
2026-03-09 01:17:09.973932 | orchestrator |
2026-03-09 01:17:09.976164 | orchestrator |
2026-03-09 01:17:09.976225 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:17:09.976240 | orchestrator |
2026-03-09 01:17:09.976251 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:17:09.976294 | orchestrator | Monday 09 March 2026 01:14:35 +0000 (0:00:00.320) 0:00:00.320 **********
2026-03-09 01:17:09.976307 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:09.976320 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:17:09.976375 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:17:09.976384 | orchestrator |
2026-03-09 01:17:09.976436 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:17:09.976445 | orchestrator | Monday 09 March 2026 01:14:35 +0000 (0:00:00.411) 0:00:00.731 **********
2026-03-09 01:17:09.976452 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-09 01:17:09.976459 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-09 01:17:09.976466 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-09 01:17:09.976473 | orchestrator |
2026-03-09 01:17:09.976480 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-09 01:17:09.976487 | orchestrator |
2026-03-09 01:17:09.976494 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-09 01:17:09.976501 | orchestrator | Monday 09 March 2026 01:14:36 +0000 (0:00:00.557) 0:00:01.289 **********
2026-03-09 01:17:09.976510 | orchestrator
| included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.976518 | orchestrator | 2026-03-09 01:17:09.976524 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-09 01:17:09.976531 | orchestrator | Monday 09 March 2026 01:14:36 +0000 (0:00:00.667) 0:00:01.957 ********** 2026-03-09 01:17:09.976612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.976624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.976645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.976667 | orchestrator | 2026-03-09 01:17:09.976674 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-09 01:17:09.976681 | orchestrator | Monday 09 March 2026 01:14:37 +0000 (0:00:00.786) 0:00:02.743 ********** 2026-03-09 01:17:09.976692 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-09 01:17:09.976706 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-09 01:17:09.976721 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:17:09.976733 | orchestrator | 2026-03-09 01:17:09.976744 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-09 01:17:09.976755 | orchestrator | Monday 09 March 2026 01:14:38 +0000 (0:00:01.061) 0:00:03.805 ********** 2026-03-09 01:17:09.976766 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.976778 | orchestrator | 2026-03-09 01:17:09.976791 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-09 01:17:09.976803 | orchestrator | Monday 09 March 2026 01:14:39 +0000 (0:00:00.769) 0:00:04.574 ********** 2026-03-09 01:17:09.976832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.976843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.976852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2026-03-09 01:17:09.976861 | orchestrator | 2026-03-09 01:17:09.976869 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-09 01:17:09.976876 | orchestrator | Monday 09 March 2026 01:14:40 +0000 (0:00:01.275) 0:00:05.849 ********** 2026-03-09 01:17:09.976890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.976905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.976914 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.976922 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.976937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.976947 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.976955 | orchestrator | 2026-03-09 01:17:09.976964 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-09 01:17:09.976972 | orchestrator | Monday 09 March 2026 01:14:41 +0000 (0:00:00.410) 0:00:06.260 ********** 2026-03-09 01:17:09.976980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.976989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.976998 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.977006 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.977017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:17:09.977041 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.977055 | orchestrator | 2026-03-09 01:17:09.977066 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-09 01:17:09.977082 | orchestrator | Monday 09 March 2026 01:14:42 +0000 (0:00:00.907) 0:00:07.168 ********** 2026-03-09 01:17:09.977094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.977115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.977127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.977139 | orchestrator | 2026-03-09 01:17:09.977148 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-03-09 01:17:09.977155 | orchestrator | Monday 09 March 2026 01:14:43 +0000 (0:00:01.295) 0:00:08.464 ********** 2026-03-09 01:17:09.977162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.977169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.977187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-09 01:17:09.977194 | orchestrator |
2026-03-09 01:17:09.977201 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-09 01:17:09.977208 | orchestrator | Monday 09 March 2026 01:14:45 +0000 (0:00:01.858) 0:00:10.323 **********
2026-03-09 01:17:09.977215 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.977222 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.977229 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.977235 | orchestrator |
2026-03-09 01:17:09.977242 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-09 01:17:09.977249 | orchestrator | Monday 09 March 2026 01:14:46 +0000 (0:00:01.127) 0:00:11.450 **********
2026-03-09 01:17:09.977255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:17:09.977263 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:17:09.977270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-09 01:17:09.977277 | orchestrator |
2026-03-09 01:17:09.977283 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-09 01:17:09.977290 | orchestrator | Monday 09 March 2026 01:14:48 +0000 (0:00:01.882) 0:00:13.332 **********
2026-03-09 01:17:09.977297 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:17:09.977308 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:17:09.977315 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-09 01:17:09.977322 | orchestrator |
2026-03-09 01:17:09.977329 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-09 01:17:09.977335 | orchestrator | Monday 09 March 2026 01:14:50 +0000 (0:00:01.900) 0:00:15.233 **********
2026-03-09 01:17:09.977342 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:17:09.977350 | orchestrator |
2026-03-09 01:17:09.977361 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-09 01:17:09.977377 | orchestrator | Monday 09 March 2026 01:14:51 +0000 (0:00:01.219) 0:00:16.452 **********
2026-03-09 01:17:09.977412 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-09 01:17:09.977424 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-09 01:17:09.977434 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:09.977445 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:17:09.977456 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:17:09.977468 | orchestrator |
2026-03-09 01:17:09.977479 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-09 01:17:09.977490 | orchestrator | Monday 09 March 2026 01:14:52 +0000 (0:00:00.818) 0:00:17.271 **********
2026-03-09 01:17:09.977510 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.977518 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.977529 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.977546 | orchestrator |
2026-03-09 01:17:09.977557 | orchestrator | TASK [grafana : Copying over custom dashboards]
******************************** 2026-03-09 01:17:09.977567 | orchestrator | Monday 09 March 2026 01:14:52 +0000 (0:00:00.697) 0:00:17.969 ********** 2026-03-09 01:17:09.977579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094142, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1613958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094142, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1613958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094142, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773015581.1613958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094201, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.175396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094201, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.175396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094201, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773015581.175396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094158, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094158, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094158, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094205, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.178396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094205, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.178396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094205, 'dev': 
108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.178396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 26655, 'inode': 1094172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.173043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.173043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.173043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1606994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1606994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1606994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.163562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.977881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.163562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.163562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.165396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094197, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.174396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094197, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.174396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094197, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.174396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1643958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1643958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1643958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094186, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.171396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094186, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.171396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094186, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.171396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094175, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094175, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978378 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094175, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1694994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094168, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094168, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978440 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094168, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.168396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094165, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1669736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094165, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1669736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-09 01:17:09.978465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094165, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1669736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094182, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1703959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094182, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1703959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094182, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1703959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094162, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.166461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094162, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.166461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.978521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094162, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.166461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[2026-03-09 01:17:09.978528 - 01:17:09.979627 | orchestrator | identical "changed" results repeated for testbed-node-0/1/2, one triple per Grafana dashboard file under /operations/grafana/dashboards/ (all mode 0644, owner root:root, dev 108; full per-file stat dicts as in the record above, elided here): ceph/radosgw-sync-overview.json (16156 B), openstack/openstack.json (57270 B), infrastructure/haproxy.json (410814 B), infrastructure/database.json (30898 B), infrastructure/node-rsrc-use.json (15725 B), infrastructure/alertmanager-overview.json (9645 B), infrastructure/opensearch.json (65458 B), infrastructure/node_exporter_full.json (682774 B), infrastructure/prometheus-remote-write.json (22317 B), infrastructure/redfish.json (38087 B), infrastructure/nodes.json (21109 B), infrastructure/memcached.json (24243 B), infrastructure/fluentd.json (82960 B), infrastructure/libvirt.json (29672 B), infrastructure/elasticsearch.json (187864 B), infrastructure/node-cluster-rsrc-use.json (16098 B), infrastructure/rabbitmq.json (222049 B), infrastructure/prometheus_alertmanager.json (115472 B), infrastructure/blackbox.json (31128 B), infrastructure/cadvisor.json (53882 B, testbed-node-0/1).]
2026-03-09 01:17:09.979627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094224,
'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.1823962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094268, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2013037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094268, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2013037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094268, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2013037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094279, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2043965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094279, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2043965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094279, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773015581.2043965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:17:09.979695 | orchestrator | 2026-03-09 01:17:09.979703 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-09 01:17:09.979710 | orchestrator | Monday 09 March 2026 01:15:31 +0000 (0:00:38.674) 0:00:56.643 ********** 2026-03-09 01:17:09.979717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.979727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.979735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:17:09.979742 | orchestrator | 2026-03-09 01:17:09.979749 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-09 01:17:09.979757 | orchestrator | Monday 09 March 2026 01:15:32 +0000 (0:00:01.308) 0:00:57.951 ********** 2026-03-09 01:17:09.979765 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.979774 | orchestrator | 2026-03-09 01:17:09.979782 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-09 01:17:09.979793 | orchestrator | Monday 09 March 2026 01:15:35 +0000 (0:00:02.548) 0:01:00.500 ********** 2026-03-09 01:17:09.979802 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.979810 | orchestrator | 2026-03-09 01:17:09.979818 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:17:09.979826 | orchestrator | Monday 09 March 2026 01:15:37 +0000 (0:00:02.574) 0:01:03.074 ********** 
2026-03-09 01:17:09.979834 | orchestrator | 2026-03-09 01:17:09.979842 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:17:09.979850 | orchestrator | Monday 09 March 2026 01:15:38 +0000 (0:00:00.077) 0:01:03.152 ********** 2026-03-09 01:17:09.979858 | orchestrator | 2026-03-09 01:17:09.979866 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-09 01:17:09.979874 | orchestrator | Monday 09 March 2026 01:15:38 +0000 (0:00:00.098) 0:01:03.250 ********** 2026-03-09 01:17:09.979882 | orchestrator | 2026-03-09 01:17:09.979890 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-09 01:17:09.979902 | orchestrator | Monday 09 March 2026 01:15:38 +0000 (0:00:00.291) 0:01:03.542 ********** 2026-03-09 01:17:09.979910 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.979918 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.979926 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.979935 | orchestrator | 2026-03-09 01:17:09.979943 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-09 01:17:09.979951 | orchestrator | Monday 09 March 2026 01:15:40 +0000 (0:00:02.014) 0:01:05.557 ********** 2026-03-09 01:17:09.979959 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.979968 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.979975 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-09 01:17:09.979984 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-09 01:17:09.979992 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-03-09 01:17:09.980001 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-03-09 01:17:09.980009 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.980017 | orchestrator | 2026-03-09 01:17:09.980025 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-09 01:17:09.980033 | orchestrator | Monday 09 March 2026 01:16:32 +0000 (0:00:51.759) 0:01:57.316 ********** 2026-03-09 01:17:09.980041 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.980049 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.980057 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.980065 | orchestrator | 2026-03-09 01:17:09.980074 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-09 01:17:09.980082 | orchestrator | Monday 09 March 2026 01:17:00 +0000 (0:00:28.044) 0:02:25.361 ********** 2026-03-09 01:17:09.980090 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.980530 | orchestrator | 2026-03-09 01:17:09.980555 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-09 01:17:09.980562 | orchestrator | Monday 09 March 2026 01:17:02 +0000 (0:00:02.616) 0:02:27.977 ********** 2026-03-09 01:17:09.980569 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.980577 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.980583 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.980590 | orchestrator | 2026-03-09 01:17:09.980597 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-09 01:17:09.980603 | orchestrator | Monday 09 March 2026 01:17:03 +0000 (0:00:00.603) 0:02:28.580 ********** 2026-03-09 01:17:09.980612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-09 01:17:09.980628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-09 01:17:09.980636 | orchestrator | 2026-03-09 01:17:09.980643 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-09 01:17:09.980650 | orchestrator | Monday 09 March 2026 01:17:06 +0000 (0:00:02.877) 0:02:31.458 ********** 2026-03-09 01:17:09.980657 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.980664 | orchestrator | 2026-03-09 01:17:09.980670 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:17:09.980678 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:17:09.980705 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:17:09.980712 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:17:09.980719 | orchestrator | 2026-03-09 01:17:09.980726 | orchestrator | 2026-03-09 01:17:09.980733 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:17:09.980740 | orchestrator | Monday 09 March 2026 01:17:06 +0000 (0:00:00.274) 0:02:31.732 ********** 2026-03-09 01:17:09.980755 | orchestrator | =============================================================================== 2026-03-09 01:17:09.980763 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 51.76s 2026-03-09 01:17:09.980770 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.67s 2026-03-09 01:17:09.980776 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.04s 2026-03-09 01:17:09.980783 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.88s 2026-03-09 01:17:09.980790 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.62s 2026-03-09 01:17:09.980796 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s 2026-03-09 01:17:09.980803 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.55s 2026-03-09 01:17:09.980810 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.01s 2026-03-09 01:17:09.980817 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.90s 2026-03-09 01:17:09.980823 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.88s 2026-03-09 01:17:09.980830 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.86s 2026-03-09 01:17:09.980837 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.31s 2026-03-09 01:17:09.980844 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s 2026-03-09 01:17:09.980850 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s 2026-03-09 01:17:09.980857 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.22s 2026-03-09 01:17:09.980864 | orchestrator | grafana : Copying over extra configuration file ------------------------- 1.13s 2026-03-09 01:17:09.980870 | orchestrator | grafana : Check if extra 
configuration file exists ---------------------- 1.06s 2026-03-09 01:17:09.980877 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.91s 2026-03-09 01:17:09.980884 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.82s 2026-03-09 01:17:09.980891 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.79s 2026-03-09 01:17:09.980898 | orchestrator | 2026-03-09 01:17:09 | INFO  | Task 60f2188d-b5cb-42ed-b53f-f170e3d6524a is in state SUCCESS 2026-03-09 01:17:09.980905 | orchestrator | 2026-03-09 01:17:09.980912 | orchestrator | 2026-03-09 01:17:09.980919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:17:09.980925 | orchestrator | 2026-03-09 01:17:09.980932 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-09 01:17:09.980939 | orchestrator | Monday 09 March 2026 01:06:16 +0000 (0:00:00.682) 0:00:00.682 ********** 2026-03-09 01:17:09.980946 | orchestrator | changed: [testbed-manager] 2026-03-09 01:17:09.980952 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.980959 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.980966 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.980973 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.980983 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.981097 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.981108 | orchestrator | 2026-03-09 01:17:09.981116 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:17:09.981131 | orchestrator | Monday 09 March 2026 01:06:18 +0000 (0:00:01.629) 0:00:02.312 ********** 2026-03-09 01:17:09.981139 | orchestrator | changed: [testbed-manager] 2026-03-09 01:17:09.981147 | orchestrator | changed: [testbed-node-0] 2026-03-09 
01:17:09.981154 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.981161 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.981167 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.981174 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.981181 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.981188 | orchestrator | 2026-03-09 01:17:09.981194 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:17:09.981206 | orchestrator | Monday 09 March 2026 01:06:19 +0000 (0:00:01.145) 0:00:03.457 ********** 2026-03-09 01:17:09.981213 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-09 01:17:09.981220 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-09 01:17:09.981227 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-09 01:17:09.981234 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-09 01:17:09.981241 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-09 01:17:09.981248 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-09 01:17:09.981255 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-09 01:17:09.981262 | orchestrator | 2026-03-09 01:17:09.981268 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-09 01:17:09.981275 | orchestrator | 2026-03-09 01:17:09.981606 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-09 01:17:09.981617 | orchestrator | Monday 09 March 2026 01:06:20 +0000 (0:00:01.294) 0:00:04.751 ********** 2026-03-09 01:17:09.981624 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.981631 | orchestrator | 2026-03-09 01:17:09.981638 | orchestrator | TASK [nova : Creating Nova databases] 
****************************************** 2026-03-09 01:17:09.981645 | orchestrator | Monday 09 March 2026 01:06:21 +0000 (0:00:01.273) 0:00:06.024 ********** 2026-03-09 01:17:09.981652 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-09 01:17:09.981659 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-09 01:17:09.981666 | orchestrator | 2026-03-09 01:17:09.981672 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-09 01:17:09.981679 | orchestrator | Monday 09 March 2026 01:06:26 +0000 (0:00:04.817) 0:00:10.842 ********** 2026-03-09 01:17:09.981686 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:17:09.981712 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:17:09.981720 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.981727 | orchestrator | 2026-03-09 01:17:09.981733 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-09 01:17:09.981740 | orchestrator | Monday 09 March 2026 01:06:31 +0000 (0:00:04.685) 0:00:15.528 ********** 2026-03-09 01:17:09.981747 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.981754 | orchestrator | 2026-03-09 01:17:09.981760 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-09 01:17:09.981767 | orchestrator | Monday 09 March 2026 01:06:33 +0000 (0:00:01.849) 0:00:17.377 ********** 2026-03-09 01:17:09.981774 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.981780 | orchestrator | 2026-03-09 01:17:09.981787 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-09 01:17:09.981794 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:02.217) 0:00:19.594 ********** 2026-03-09 01:17:09.981800 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.981807 | orchestrator | 2026-03-09 
01:17:09.981814 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-09 01:17:09.981821 | orchestrator | Monday 09 March 2026 01:06:39 +0000 (0:00:04.193) 0:00:23.787 ********** 2026-03-09 01:17:09.981834 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.981841 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.981848 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.981854 | orchestrator | 2026-03-09 01:17:09.981861 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-09 01:17:09.981868 | orchestrator | Monday 09 March 2026 01:06:40 +0000 (0:00:00.728) 0:00:24.516 ********** 2026-03-09 01:17:09.981875 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.981881 | orchestrator | 2026-03-09 01:17:09.981918 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-09 01:17:09.981927 | orchestrator | Monday 09 March 2026 01:07:15 +0000 (0:00:35.332) 0:00:59.849 ********** 2026-03-09 01:17:09.981935 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.981942 | orchestrator | 2026-03-09 01:17:09.981949 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-09 01:17:09.981956 | orchestrator | Monday 09 March 2026 01:07:32 +0000 (0:00:17.179) 0:01:17.028 ********** 2026-03-09 01:17:09.982314 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.982386 | orchestrator | 2026-03-09 01:17:09.982412 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-09 01:17:09.982419 | orchestrator | Monday 09 March 2026 01:07:48 +0000 (0:00:15.810) 0:01:32.839 ********** 2026-03-09 01:17:09.982426 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.982433 | orchestrator | 2026-03-09 01:17:09.982440 | orchestrator | TASK [nova : Update cell0 mappings] 
******************************************** 2026-03-09 01:17:09.982447 | orchestrator | Monday 09 March 2026 01:07:50 +0000 (0:00:01.265) 0:01:34.105 ********** 2026-03-09 01:17:09.982454 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.982460 | orchestrator | 2026-03-09 01:17:09.982467 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-09 01:17:09.982474 | orchestrator | Monday 09 March 2026 01:07:50 +0000 (0:00:00.551) 0:01:34.657 ********** 2026-03-09 01:17:09.982481 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.982488 | orchestrator | 2026-03-09 01:17:09.982495 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-09 01:17:09.982502 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:00.535) 0:01:35.192 ********** 2026-03-09 01:17:09.982509 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.982515 | orchestrator | 2026-03-09 01:17:09.982522 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-09 01:17:09.982529 | orchestrator | Monday 09 March 2026 01:08:11 +0000 (0:00:20.090) 0:01:55.283 ********** 2026-03-09 01:17:09.982535 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.982542 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.982549 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.982556 | orchestrator | 2026-03-09 01:17:09.982569 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-09 01:17:09.982576 | orchestrator | 2026-03-09 01:17:09.982583 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-09 01:17:09.982595 | orchestrator | Monday 09 March 2026 01:08:11 +0000 (0:00:00.378) 0:01:55.661 ********** 2026-03-09 
01:17:09.982605 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.982615 | orchestrator | 2026-03-09 01:17:09.982626 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-09 01:17:09.982636 | orchestrator | Monday 09 March 2026 01:08:12 +0000 (0:00:00.802) 0:01:56.463 ********** 2026-03-09 01:17:09.982646 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.982657 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.982668 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.982679 | orchestrator | 2026-03-09 01:17:09.982690 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-09 01:17:09.982701 | orchestrator | Monday 09 March 2026 01:08:14 +0000 (0:00:02.517) 0:01:58.981 ********** 2026-03-09 01:17:09.982725 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.982736 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.982748 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.982759 | orchestrator | 2026-03-09 01:17:09.983020 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-09 01:17:09.983032 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:02.224) 0:02:01.206 ********** 2026-03-09 01:17:09.983039 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.983046 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983053 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983060 | orchestrator | 2026-03-09 01:17:09.983066 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-09 01:17:09.983073 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:00.440) 0:02:01.646 ********** 2026-03-09 01:17:09.983080 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 01:17:09.983152 
| orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983162 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 01:17:09.983169 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983176 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-09 01:17:09.983183 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-09 01:17:09.983190 | orchestrator | 2026-03-09 01:17:09.983197 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-09 01:17:09.983204 | orchestrator | Monday 09 March 2026 01:08:28 +0000 (0:00:11.335) 0:02:12.982 ********** 2026-03-09 01:17:09.983211 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.983217 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983224 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983231 | orchestrator | 2026-03-09 01:17:09.983237 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-09 01:17:09.983244 | orchestrator | Monday 09 March 2026 01:08:30 +0000 (0:00:01.148) 0:02:14.130 ********** 2026-03-09 01:17:09.983251 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-09 01:17:09.983258 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.983264 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 01:17:09.983271 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983278 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 01:17:09.983285 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983291 | orchestrator | 2026-03-09 01:17:09.983298 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-09 01:17:09.983305 | orchestrator | Monday 09 March 2026 01:08:31 +0000 (0:00:01.901) 0:02:16.032 ********** 2026-03-09 01:17:09.983311 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:17:09.983318 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.983325 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983331 | orchestrator | 2026-03-09 01:17:09.983338 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-09 01:17:09.983345 | orchestrator | Monday 09 March 2026 01:08:32 +0000 (0:00:01.013) 0:02:17.045 ********** 2026-03-09 01:17:09.983352 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983358 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983365 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.983372 | orchestrator | 2026-03-09 01:17:09.983379 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-09 01:17:09.983385 | orchestrator | Monday 09 March 2026 01:08:34 +0000 (0:00:01.522) 0:02:18.567 ********** 2026-03-09 01:17:09.983409 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983416 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983422 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.983429 | orchestrator | 2026-03-09 01:17:09.983436 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-09 01:17:09.983452 | orchestrator | Monday 09 March 2026 01:08:37 +0000 (0:00:02.783) 0:02:21.351 ********** 2026-03-09 01:17:09.983459 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983466 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983473 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.983480 | orchestrator | 2026-03-09 01:17:09.983487 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-09 01:17:09.983494 | orchestrator | Monday 09 March 2026 01:09:03 +0000 (0:00:26.335) 0:02:47.687 ********** 2026-03-09 01:17:09.983500 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:17:09.983507 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983514 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.983521 | orchestrator | 2026-03-09 01:17:09.983528 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-09 01:17:09.983535 | orchestrator | Monday 09 March 2026 01:09:18 +0000 (0:00:14.566) 0:03:02.253 ********** 2026-03-09 01:17:09.983541 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.983548 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983555 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983562 | orchestrator | 2026-03-09 01:17:09.983568 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-09 01:17:09.983575 | orchestrator | Monday 09 March 2026 01:09:19 +0000 (0:00:00.947) 0:03:03.201 ********** 2026-03-09 01:17:09.983592 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983599 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983609 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.983621 | orchestrator | 2026-03-09 01:17:09.983632 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-09 01:17:09.983644 | orchestrator | Monday 09 March 2026 01:09:33 +0000 (0:00:14.040) 0:03:17.241 ********** 2026-03-09 01:17:09.983654 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983665 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.983676 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983688 | orchestrator | 2026-03-09 01:17:09.983699 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-09 01:17:09.983711 | orchestrator | Monday 09 March 2026 01:09:34 +0000 (0:00:01.184) 0:03:18.426 ********** 2026-03-09 01:17:09.983723 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 01:17:09.983733 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.983745 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.983754 | orchestrator | 2026-03-09 01:17:09.983760 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-09 01:17:09.983767 | orchestrator | 2026-03-09 01:17:09.983775 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-09 01:17:09.983783 | orchestrator | Monday 09 March 2026 01:09:34 +0000 (0:00:00.467) 0:03:18.893 ********** 2026-03-09 01:17:09.983791 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.983800 | orchestrator | 2026-03-09 01:17:09.983808 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-09 01:17:09.983816 | orchestrator | Monday 09 March 2026 01:09:35 +0000 (0:00:00.535) 0:03:19.429 ********** 2026-03-09 01:17:09.983824 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-09 01:17:09.983833 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-09 01:17:09.983841 | orchestrator | 2026-03-09 01:17:09.983919 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-09 01:17:09.983930 | orchestrator | Monday 09 March 2026 01:09:39 +0000 (0:00:03.758) 0:03:23.187 ********** 2026-03-09 01:17:09.983938 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-09 01:17:09.983947 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-09 01:17:09.983963 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-09 
01:17:09.983971 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-09 01:17:09.983979 | orchestrator | 2026-03-09 01:17:09.983987 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-09 01:17:09.983995 | orchestrator | Monday 09 March 2026 01:09:46 +0000 (0:00:06.938) 0:03:30.126 ********** 2026-03-09 01:17:09.984004 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:17:09.984011 | orchestrator | 2026-03-09 01:17:09.984019 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-09 01:17:09.984027 | orchestrator | Monday 09 March 2026 01:09:49 +0000 (0:00:03.645) 0:03:33.772 ********** 2026-03-09 01:17:09.984035 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:17:09.984042 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-09 01:17:09.984050 | orchestrator | 2026-03-09 01:17:09.984058 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-09 01:17:09.984066 | orchestrator | Monday 09 March 2026 01:09:54 +0000 (0:00:04.500) 0:03:38.273 ********** 2026-03-09 01:17:09.984074 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:17:09.984082 | orchestrator | 2026-03-09 01:17:09.984090 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-09 01:17:09.984098 | orchestrator | Monday 09 March 2026 01:09:57 +0000 (0:00:03.419) 0:03:41.692 ********** 2026-03-09 01:17:09.984105 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-09 01:17:09.984113 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-09 01:17:09.984121 | orchestrator | 2026-03-09 01:17:09.984129 | orchestrator | TASK [nova : Ensuring config directories exist] 
******************************** 2026-03-09 01:17:09.984137 | orchestrator | Monday 09 March 2026 01:10:05 +0000 (0:00:08.318) 0:03:50.010 ********** 2026-03-09 01:17:09.984153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-03-09 01:17:09.984246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984273 | orchestrator | 2026-03-09 01:17:09.984280 | orchestrator | TASK [nova : Check if policies 
shall be overwritten] *************************** 2026-03-09 01:17:09.984287 | orchestrator | Monday 09 March 2026 01:10:07 +0000 (0:00:01.811) 0:03:51.822 ********** 2026-03-09 01:17:09.984294 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.984301 | orchestrator | 2026-03-09 01:17:09.984307 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-09 01:17:09.984319 | orchestrator | Monday 09 March 2026 01:10:08 +0000 (0:00:00.324) 0:03:52.147 ********** 2026-03-09 01:17:09.984326 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.984333 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.984339 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.984346 | orchestrator | 2026-03-09 01:17:09.984353 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-09 01:17:09.984359 | orchestrator | Monday 09 March 2026 01:10:08 +0000 (0:00:00.552) 0:03:52.700 ********** 2026-03-09 01:17:09.984366 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:17:09.984373 | orchestrator | 2026-03-09 01:17:09.984380 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-09 01:17:09.984386 | orchestrator | Monday 09 March 2026 01:10:09 +0000 (0:00:01.335) 0:03:54.035 ********** 2026-03-09 01:17:09.984487 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.984497 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.984503 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.984510 | orchestrator | 2026-03-09 01:17:09.984517 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-09 01:17:09.984524 | orchestrator | Monday 09 March 2026 01:10:10 +0000 (0:00:00.340) 0:03:54.375 ********** 2026-03-09 01:17:09.984530 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.984537 | orchestrator | 2026-03-09 01:17:09.984544 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-09 01:17:09.984551 | orchestrator | Monday 09 March 2026 01:10:11 +0000 (0:00:01.300) 0:03:55.675 ********** 2026-03-09 01:17:09.984558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.984663 | orchestrator | 2026-03-09 01:17:09.984670 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-09 01:17:09.984677 | orchestrator | Monday 09 March 2026 01:10:15 +0000 (0:00:04.335) 0:04:00.010 ********** 2026-03-09 01:17:09.984688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.984703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984711 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.984743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.984757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984765 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.984776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.984790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984797 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.984804 | orchestrator | 2026-03-09 01:17:09.984811 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-09 01:17:09.984817 | orchestrator | Monday 09 March 2026 01:10:18 +0000 (0:00:02.167) 0:04:02.178 ********** 2026-03-09 01:17:09.984846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 
01:17:09.984855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984862 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.984873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 
01:17:09.984886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984893 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.984933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 
01:17:09.984942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.984949 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.984956 | orchestrator | 2026-03-09 01:17:09.984963 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-09 01:17:09.984969 | orchestrator | Monday 09 March 2026 01:10:20 +0000 (0:00:02.608) 0:04:04.786 ********** 2026-03-09 01:17:09.984977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.984996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985063 | orchestrator | 2026-03-09 01:17:09.985070 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-09 01:17:09.985076 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:05.012) 0:04:09.800 ********** 2026-03-09 01:17:09.985103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985186 | orchestrator | 2026-03-09 01:17:09.985194 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-09 01:17:09.985202 | orchestrator | Monday 09 March 2026 01:10:41 +0000 (0:00:15.973) 0:04:25.773 ********** 2026-03-09 01:17:09.985210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.985223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.985231 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.985244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.985275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.985284 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.985292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:09.985306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.985314 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.985322 | orchestrator | 2026-03-09 01:17:09.985329 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-09 01:17:09.985337 | orchestrator | Monday 09 March 2026 01:10:44 +0000 (0:00:02.749) 0:04:28.525 ********** 2026-03-09 01:17:09.985346 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.985353 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.985361 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.985372 | orchestrator | 2026-03-09 01:17:09.985383 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-09 01:17:09.985414 | orchestrator | Monday 09 March 2026 01:10:47 +0000 (0:00:03.155) 0:04:31.681 ********** 2026-03-09 01:17:09.985424 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.985434 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.985445 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 01:17:09.985455 | orchestrator | 2026-03-09 01:17:09.985472 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-09 01:17:09.985483 | orchestrator | Monday 09 March 2026 01:10:48 +0000 (0:00:00.467) 0:04:32.149 ********** 2026-03-09 01:17:09.985531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:09.985608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.985647 | orchestrator | 2026-03-09 01:17:09.985655 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:09.985661 | orchestrator | Monday 09 March 2026 01:10:52 +0000 (0:00:04.451) 0:04:36.600 ********** 2026-03-09 01:17:09.985668 | orchestrator | 2026-03-09 01:17:09.985675 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:09.985686 | orchestrator | Monday 09 March 2026 01:10:53 +0000 (0:00:00.646) 0:04:37.247 ********** 2026-03-09 01:17:09.985693 | orchestrator | 2026-03-09 01:17:09.985700 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:09.985707 | orchestrator | Monday 09 March 2026 01:10:53 +0000 (0:00:00.333) 0:04:37.580 ********** 2026-03-09 01:17:09.985713 | orchestrator | 2026-03-09 01:17:09.985720 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-09 01:17:09.985728 | orchestrator | Monday 09 March 2026 01:10:53 +0000 (0:00:00.325) 0:04:37.906 ********** 2026-03-09 01:17:09.985739 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.985750 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.985761 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.985773 | orchestrator | 2026-03-09 01:17:09.985784 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-09 01:17:09.985794 | orchestrator | Monday 09 March 2026 01:11:15 +0000 (0:00:21.763) 0:04:59.669 ********** 2026-03-09 01:17:09.985801 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.985807 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.985814 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.985821 | orchestrator | 2026-03-09 01:17:09.985827 | orchestrator | PLAY [Apply role 
nova-cell] **************************************************** 2026-03-09 01:17:09.985834 | orchestrator | 2026-03-09 01:17:09.985841 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:09.985848 | orchestrator | Monday 09 March 2026 01:11:29 +0000 (0:00:13.950) 0:05:13.620 ********** 2026-03-09 01:17:09.985855 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:09.985862 | orchestrator | 2026-03-09 01:17:09.985869 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:09.985875 | orchestrator | Monday 09 March 2026 01:11:31 +0000 (0:00:02.156) 0:05:15.776 ********** 2026-03-09 01:17:09.985882 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.985889 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.985895 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.985902 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.985909 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.985916 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.985922 | orchestrator | 2026-03-09 01:17:09.985929 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-09 01:17:09.985936 | orchestrator | Monday 09 March 2026 01:11:32 +0000 (0:00:00.712) 0:05:16.488 ********** 2026-03-09 01:17:09.985943 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.985949 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.985956 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.985963 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:17:09.985970 | orchestrator | 2026-03-09 01:17:09.985976 | orchestrator | TASK [module-load : Load modules] 
**********************************************
2026-03-09 01:17:09.985983 | orchestrator | Monday 09 March 2026 01:11:34 +0000 (0:00:02.543) 0:05:19.031 **********
2026-03-09 01:17:09.985990 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-09 01:17:09.985997 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-09 01:17:09.986004 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-09 01:17:09.986010 | orchestrator |
2026-03-09 01:17:09.986053 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-09 01:17:09.986062 | orchestrator | Monday 09 March 2026 01:11:36 +0000 (0:00:01.469) 0:05:20.502 **********
2026-03-09 01:17:09.986069 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-09 01:17:09.986076 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-09 01:17:09.986083 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-09 01:17:09.986096 | orchestrator |
2026-03-09 01:17:09.986102 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-09 01:17:09.986109 | orchestrator | Monday 09 March 2026 01:11:38 +0000 (0:00:02.076) 0:05:22.579 **********
2026-03-09 01:17:09.986116 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-09 01:17:09.986123 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.986129 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-09 01:17:09.986136 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:09.986143 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-09 01:17:09.986149 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:09.986156 | orchestrator |
2026-03-09 01:17:09.986163 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-09 01:17:09.986170 | orchestrator | Monday 09 March 2026 01:11:39 +0000 (0:00:00.724) 0:05:23.304 **********
2026-03-09 01:17:09.986176 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986183 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986190 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.986197 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986204 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986245 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.986256 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986272 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986286 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986296 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 01:17:09.986307 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986317 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.986328 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986338 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986348 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 01:17:09.986357 | orchestrator |
2026-03-09 01:17:09.986367 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-09 01:17:09.986377 | orchestrator | Monday 09 March 2026 01:11:41 +0000 (0:00:02.665) 0:05:25.969 **********
2026-03-09 01:17:09.986388 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.986460 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:09.986472 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.986483 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.986493 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:09.986505 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:09.986513 | orchestrator |
2026-03-09 01:17:09.986520 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-09 01:17:09.986527 | orchestrator | Monday 09 March 2026 01:11:43 +0000 (0:00:01.803) 0:05:27.772 **********
2026-03-09 01:17:09.986533 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.986540 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.986547 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.986554 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:09.986560 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:09.986567 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:09.986573 | orchestrator |
2026-03-09 01:17:09.986580 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-09 01:17:09.986587 | orchestrator | Monday 09 March 2026 01:11:46 +0000 (0:00:02.589) 0:05:30.361 **********
2026-03-09 01:17:09.986603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.986620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.986628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.986667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.986676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.986684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.986710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.986717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.986752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.986799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.986822 | orchestrator |
2026-03-09 01:17:09.986829 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-09 01:17:09.986836 | orchestrator | Monday 09 March 2026 01:11:51 +0000 (0:00:05.243) 0:05:35.606 **********
2026-03-09 01:17:09.986868 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:17:09.986880 | orchestrator |
2026-03-09 01:17:09.986891 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-09 01:17:09.986902 | orchestrator | Monday 09 March 2026 01:11:54 +0000 (0:00:03.011) 0:05:38.617 **********
2026-03-09 01:17:09.986913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.986934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.986946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.986962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.987006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987052 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987142 | orchestrator |
2026-03-09 01:17:09.987152 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-09 01:17:09.987161 | orchestrator | Monday 09 March 2026 01:12:00 +0000 (0:00:06.322) 0:05:44.940 **********
2026-03-09 01:17:09.987176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.987218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987251 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.987261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.987273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987299 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:09.987309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987372 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.987381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.987413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987433 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:09.987449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987470 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.987509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:09.987533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:09.987545 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.987555 | orchestrator |
2026-03-09 01:17:09.987565 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-09 01:17:09.987576 | orchestrator | Monday 09 March 2026 01:12:04 +0000 (0:00:03.879) 0:05:48.819 **********
2026-03-09 01:17:09.987587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:09.987604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:09.987615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987626 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.987667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.987689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.987701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987712 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.987722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.987738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.987777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987798 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.987808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.987819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987829 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.987839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.987848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987858 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.987873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.987884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.987902 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.987914 | orchestrator | 2026-03-09 01:17:09.987925 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:09.987936 | orchestrator | 
Monday 09 March 2026 01:12:10 +0000 (0:00:06.165) 0:05:54.985 **********
2026-03-09 01:17:09.987945 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.987955 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.987993 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.988004 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:17:09.988014 | orchestrator |
2026-03-09 01:17:09.988023 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-09 01:17:09.988033 | orchestrator | Monday 09 March 2026 01:12:12 +0000 (0:00:01.908) 0:05:56.894 **********
2026-03-09 01:17:09.988043 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-09 01:17:09.988052 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-09 01:17:09.988061 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-09 01:17:09.988071 | orchestrator |
2026-03-09 01:17:09.988080 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-09 01:17:09.988090 | orchestrator | Monday 09 March 2026 01:12:14 +0000 (0:00:02.160) 0:05:59.054 **********
2026-03-09 01:17:09.988100 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-09 01:17:09.988109 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-09 01:17:09.988120 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-09 01:17:09.988130 | orchestrator |
2026-03-09 01:17:09.988139 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-09 01:17:09.988149 | orchestrator | Monday 09 March 2026 01:12:18 +0000 (0:00:03.739) 0:06:02.801 **********
2026-03-09 01:17:09.988158 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:17:09.988169 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:17:09.988179 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:17:09.988190 | orchestrator |
2026-03-09 01:17:09.988200 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-09 01:17:09.988211 | orchestrator | Monday 09 March 2026 01:12:20 +0000 (0:00:01.586) 0:06:04.388 **********
2026-03-09 01:17:09.988222 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:17:09.988232 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:17:09.988242 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:17:09.988252 | orchestrator |
2026-03-09 01:17:09.988262 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-09 01:17:09.988272 | orchestrator | Monday 09 March 2026 01:12:22 +0000 (0:00:02.334) 0:06:06.722 **********
2026-03-09 01:17:09.988282 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-09 01:17:09.988293 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-09 01:17:09.988304 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-09 01:17:09.988314 | orchestrator |
2026-03-09 01:17:09.988325 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-09 01:17:09.988335 | orchestrator | Monday 09 March 2026 01:12:24 +0000 (0:00:01.679) 0:06:08.401 **********
2026-03-09 01:17:09.988345 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-09 01:17:09.988355 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-09 01:17:09.988366 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-09 01:17:09.988372 | orchestrator |
2026-03-09 01:17:09.988379 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-09 01:17:09.988424 | orchestrator | Monday 09 March 2026 01:12:26 +0000 (0:00:02.149) 0:06:10.551 **********
2026-03-09 01:17:09.988432 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-09 01:17:09.988438 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-09 01:17:09.988445 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-09 01:17:09.988451 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-09 01:17:09.988457 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-09 01:17:09.988463 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-09 01:17:09.988470 | orchestrator |
2026-03-09 01:17:09.988476 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-09 01:17:09.988482 | orchestrator | Monday 09 March 2026 01:12:32 +0000 (0:00:06.448) 0:06:16.999 **********
2026-03-09 01:17:09.988488 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.988495 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:09.988501 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:09.988507 | orchestrator |
2026-03-09 01:17:09.988519 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-09 01:17:09.988526 | orchestrator | Monday 09 March 2026 01:12:33 +0000 (0:00:00.690) 0:06:17.690 **********
2026-03-09 01:17:09.988532 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.988538 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:09.988545 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:09.988551 | orchestrator |
2026-03-09 01:17:09.988561 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-09 01:17:09.988571 | orchestrator | Monday 09 March 2026 01:12:33 +0000 (0:00:00.355) 0:06:18.045 **********
2026-03-09 01:17:09.988579 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:09.988588 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:09.988596 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:09.988605 | orchestrator |
2026-03-09 01:17:09.988619 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-09 01:17:09.988634 | orchestrator | Monday 09 March 2026 01:12:36 +0000 (0:00:02.268) 0:06:20.314 **********
2026-03-09 01:17:09.988643 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-09 01:17:09.988655 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-09 01:17:09.988665 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-09 01:17:09.988676 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-09 01:17:09.988728 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-09 01:17:09.988736 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-09 01:17:09.988743 | orchestrator |
2026-03-09 01:17:09.988753 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-09 01:17:09.988763 | orchestrator | Monday 09 March 2026 01:12:40 +0000 (0:00:04.710) 0:06:25.025 **********
2026-03-09 01:17:09.988773 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-09 01:17:09.988783 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-09 01:17:09.988794 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-09 01:17:09.988803 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-09 01:17:09.988812 | orchestrator | changed: [testbed-node-3]
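The sequence above ("Check nova keyring file" → "Extract nova key from file" → "Pushing nova secret xml for libvirt" → "Pushing secrets key for libvirt") reads the Ceph keyrings, pulls out the base64 keys, and registers them with libvirt as secrets keyed by the UUIDs visible in the items (e.g. the client.nova secret). A minimal sketch of the extraction half, assuming the standard INI-style Ceph keyring layout; `extract_ceph_key` and `SAMPLE_KEYRING` are hypothetical illustrations, not kolla-ansible's actual implementation:

```python
import configparser

def extract_ceph_key(keyring_text: str, client: str = "client.nova") -> str:
    """Return the base64 'key' entry from the given client section of a
    Ceph keyring, which uses an INI-style layout."""
    parser = configparser.ConfigParser()
    parser.read_string(keyring_text)
    return parser[client]["key"].strip()

# Hypothetical keyring content, as deployed by the "Copy over ceph nova
# keyring file" task (the real key value is of course different).
SAMPLE_KEYRING = """[client.nova]
key = AQAAexamplekeyAAAAexamplekeyAAA==
"""

print(extract_ceph_key(SAMPLE_KEYRING, "client.nova"))
```

The "Pushing secrets key for libvirt" step then hands the extracted key to libvirt, conceptually equivalent to `virsh secret-define` on the generated XML followed by `virsh secret-set-value <uuid> --base64 <key>`, so that nova-compute can attach RBD volumes without the key appearing in the domain XML.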
2026-03-09 01:17:09.988822 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-09 01:17:09.988832 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:09.988851 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-09 01:17:09.988861 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:09.988871 | orchestrator |
2026-03-09 01:17:09.988881 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-09 01:17:09.988892 | orchestrator | Monday 09 March 2026 01:12:44 +0000 (0:00:03.771) 0:06:28.796 **********
2026-03-09 01:17:09.988903 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.988913 | orchestrator |
2026-03-09 01:17:09.988923 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-09 01:17:09.988934 | orchestrator | Monday 09 March 2026 01:12:44 +0000 (0:00:00.141) 0:06:28.938 **********
2026-03-09 01:17:09.988944 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:09.988954 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:09.988963 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:09.989222 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:09.989247 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:09.989256 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:09.989266 | orchestrator |
2026-03-09 01:17:09.989276 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-09 01:17:09.989285 | orchestrator | Monday 09 March 2026 01:12:45 +0000 (0:00:00.631) 0:06:29.570 **********
2026-03-09 01:17:09.989295 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-09 01:17:09.989306 | orchestrator |
2026-03-09 01:17:09.989318 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-09 01:17:09.989329 | orchestrator | Monday 09 March 2026 01:12:46 +0000
(0:00:00.796) 0:06:30.366 ********** 2026-03-09 01:17:09.989340 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.989351 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.989361 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.989371 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.989382 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.989455 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.989467 | orchestrator | 2026-03-09 01:17:09.989478 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-09 01:17:09.989490 | orchestrator | Monday 09 March 2026 01:12:47 +0000 (0:00:00.927) 0:06:31.293 ********** 2026-03-09 01:17:09.989513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989722 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989792 | orchestrator | 2026-03-09 01:17:09.989801 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-09 01:17:09.989810 | orchestrator | Monday 09 March 2026 01:12:52 +0000 (0:00:04.810) 
0:06:36.104 ********** 2026-03-09 01:17:09.989819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.989829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.989842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.989851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.989874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.989884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.989893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.989987 | orchestrator | 2026-03-09 01:17:09.989996 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-09 01:17:09.990002 | orchestrator | Monday 09 March 2026 01:12:59 +0000 (0:00:07.423) 0:06:43.527 ********** 2026-03-09 01:17:09.990012 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.990050 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.990056 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.990062 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990067 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990073 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990078 | orchestrator | 2026-03-09 01:17:09.990084 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-09 01:17:09.990090 | orchestrator | Monday 09 March 2026 
01:13:01 +0000 (0:00:01.542) 0:06:45.070 ********** 2026-03-09 01:17:09.990095 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:09.990101 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:09.990107 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:09.990112 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:09.990118 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:09.990123 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:09.990129 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990135 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:09.990145 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:09.990151 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990157 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:09.990163 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990168 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:09.990174 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:09.990180 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:09.990185 | orchestrator | 2026-03-09 01:17:09.990191 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 
2026-03-09 01:17:09.990196 | orchestrator | Monday 09 March 2026 01:13:05 +0000 (0:00:04.654) 0:06:49.724 ********** 2026-03-09 01:17:09.990202 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.990207 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.990213 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.990219 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990224 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990229 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990235 | orchestrator | 2026-03-09 01:17:09.990241 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-09 01:17:09.990246 | orchestrator | Monday 09 March 2026 01:13:06 +0000 (0:00:00.651) 0:06:50.376 ********** 2026-03-09 01:17:09.990252 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:09.990258 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:09.990263 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:09.990269 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:09.990275 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:09.990286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990291 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990297 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990303 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:09.990312 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990321 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990331 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990340 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990349 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990358 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:09.990367 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990381 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990409 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990418 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990426 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990435 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:09.990443 | orchestrator | 2026-03-09 01:17:09.990451 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-09 
01:17:09.990459 | orchestrator | Monday 09 March 2026 01:13:13 +0000 (0:00:06.742) 0:06:57.119 ********** 2026-03-09 01:17:09.990468 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:09.990476 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:09.990484 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:09.990493 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:09.990502 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:09.990512 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:09.990529 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:09.990538 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:09.990547 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:17:09.990555 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:09.990564 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:17:09.990573 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:09.990582 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990592 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:09.990601 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:17:09.990619 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:09.990628 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990636 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:09.990645 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:09.990654 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:09.990673 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:09.990682 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:09.990691 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:09.990699 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:09.990704 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:09.990710 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:09.990715 | orchestrator | 2026-03-09 01:17:09.990721 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-09 01:17:09.990726 | orchestrator | Monday 09 March 2026 01:13:20 +0000 (0:00:07.872) 0:07:04.991 ********** 2026-03-09 01:17:09.990732 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.990737 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.990743 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.990748 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990754 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990759 
| orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990765 | orchestrator | 2026-03-09 01:17:09.990770 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-09 01:17:09.990776 | orchestrator | Monday 09 March 2026 01:13:21 +0000 (0:00:00.914) 0:07:05.906 ********** 2026-03-09 01:17:09.990781 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.990786 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.990792 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.990797 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990803 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990808 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990813 | orchestrator | 2026-03-09 01:17:09.990819 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-09 01:17:09.990824 | orchestrator | Monday 09 March 2026 01:13:22 +0000 (0:00:00.690) 0:07:06.597 ********** 2026-03-09 01:17:09.990830 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990835 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.990845 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.990851 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.990857 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.990862 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.990868 | orchestrator | 2026-03-09 01:17:09.990874 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-09 01:17:09.990879 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:03.431) 0:07:10.028 ********** 2026-03-09 01:17:09.990886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.990904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.990914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.990923 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.990932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.990942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.990956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.990971 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.990980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.990994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})  2026-03-09 01:17:09.991004 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.991014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:09.991023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:09.991037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.991047 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.991057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.991083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.991093 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.991102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:09.991112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:09.991122 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.991128 | orchestrator | 2026-03-09 01:17:09.991134 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-09 01:17:09.991139 | orchestrator | Monday 09 March 2026 01:13:28 +0000 (0:00:02.928) 0:07:12.957 ********** 2026-03-09 01:17:09.991145 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-09 01:17:09.991151 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-09 01:17:09.991156 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.991162 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-09 01:17:09.991167 | orchestrator | skipping: [testbed-node-4] => 
(item=nova-compute-ironic)  2026-03-09 01:17:09.991173 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-09 01:17:09.991178 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-09 01:17:09.991184 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.991193 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-09 01:17:09.991202 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-09 01:17:09.991211 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.991220 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-09 01:17:09.991229 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-09 01:17:09.991238 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.991253 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.991261 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-09 01:17:09.991271 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-09 01:17:09.991279 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.991288 | orchestrator | 2026-03-09 01:17:09.991297 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-09 01:17:09.991311 | orchestrator | Monday 09 March 2026 01:13:30 +0000 (0:00:01.309) 0:07:14.267 ********** 2026-03-09 01:17:09.991321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:09.991527 | orchestrator | 2026-03-09 01:17:09.991532 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:09.991538 | orchestrator | Monday 09 March 2026 01:13:34 +0000 (0:00:04.333) 0:07:18.601 ********** 2026-03-09 01:17:09.991543 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.991548 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.991553 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.991558 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.991563 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.991567 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.991572 | orchestrator | 2026-03-09 01:17:09.991577 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991582 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:01.534) 0:07:20.136 ********** 2026-03-09 01:17:09.991591 | orchestrator | 2026-03-09 01:17:09.991596 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991601 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.181) 0:07:20.317 ********** 2026-03-09 01:17:09.991606 | orchestrator | 2026-03-09 01:17:09.991611 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991615 | orchestrator | Monday 09 March 2026 01:13:36 +0000 
(0:00:00.176) 0:07:20.494 ********** 2026-03-09 01:17:09.991620 | orchestrator | 2026-03-09 01:17:09.991625 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991630 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.144) 0:07:20.639 ********** 2026-03-09 01:17:09.991635 | orchestrator | 2026-03-09 01:17:09.991640 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991644 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.149) 0:07:20.788 ********** 2026-03-09 01:17:09.991649 | orchestrator | 2026-03-09 01:17:09.991654 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-09 01:17:09.991659 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:00.136) 0:07:20.925 ********** 2026-03-09 01:17:09.991664 | orchestrator | 2026-03-09 01:17:09.991669 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-09 01:17:09.991674 | orchestrator | Monday 09 March 2026 01:13:37 +0000 (0:00:00.391) 0:07:21.316 ********** 2026-03-09 01:17:09.991678 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.991683 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.991688 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.991693 | orchestrator | 2026-03-09 01:17:09.991698 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-09 01:17:09.991705 | orchestrator | Monday 09 March 2026 01:13:45 +0000 (0:00:08.243) 0:07:29.560 ********** 2026-03-09 01:17:09.991710 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.991715 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.991720 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.991725 | orchestrator | 2026-03-09 01:17:09.991730 | orchestrator | RUNNING HANDLER 
[nova-cell : Restart nova-ssh container] *********************** 2026-03-09 01:17:09.991735 | orchestrator | Monday 09 March 2026 01:14:04 +0000 (0:00:19.413) 0:07:48.973 ********** 2026-03-09 01:17:09.991740 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.991748 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.991755 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.991763 | orchestrator | 2026-03-09 01:17:09.991771 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-09 01:17:09.991779 | orchestrator | Monday 09 March 2026 01:14:43 +0000 (0:00:38.243) 0:08:27.217 ********** 2026-03-09 01:17:09.991787 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.991794 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.991802 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.991809 | orchestrator | 2026-03-09 01:17:09.991817 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-09 01:17:09.991824 | orchestrator | Monday 09 March 2026 01:15:20 +0000 (0:00:37.345) 0:09:04.562 ********** 2026-03-09 01:17:09.991831 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.991839 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.991847 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.991855 | orchestrator | 2026-03-09 01:17:09.991862 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-09 01:17:09.991870 | orchestrator | Monday 09 March 2026 01:15:21 +0000 (0:00:00.894) 0:09:05.457 ********** 2026-03-09 01:17:09.991878 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.991885 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.991893 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.991900 | orchestrator | 2026-03-09 01:17:09.991908 | orchestrator | RUNNING HANDLER [nova-cell : 
Restart nova-compute container] ******************* 2026-03-09 01:17:09.991928 | orchestrator | Monday 09 March 2026 01:15:22 +0000 (0:00:01.009) 0:09:06.467 ********** 2026-03-09 01:17:09.991936 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:09.991944 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:09.991952 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:09.991960 | orchestrator | 2026-03-09 01:17:09.991967 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-09 01:17:09.991976 | orchestrator | Monday 09 March 2026 01:15:50 +0000 (0:00:27.645) 0:09:34.113 ********** 2026-03-09 01:17:09.991984 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.991992 | orchestrator | 2026-03-09 01:17:09.992000 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-09 01:17:09.992008 | orchestrator | Monday 09 March 2026 01:15:50 +0000 (0:00:00.132) 0:09:34.245 ********** 2026-03-09 01:17:09.992015 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.992024 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.992032 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992039 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992047 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992056 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-09 01:17:09.992065 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:17:09.992073 | orchestrator | 2026-03-09 01:17:09.992081 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-09 01:17:09.992089 | orchestrator | Monday 09 March 2026 01:16:13 +0000 (0:00:23.109) 0:09:57.355 ********** 2026-03-09 01:17:09.992097 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.992104 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.992112 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992120 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.992128 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992136 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992143 | orchestrator | 2026-03-09 01:17:09.992148 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-09 01:17:09.992153 | orchestrator | Monday 09 March 2026 01:16:24 +0000 (0:00:10.924) 0:10:08.280 ********** 2026-03-09 01:17:09.992158 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992162 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.992167 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.992172 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992177 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992182 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-09 01:17:09.992187 | orchestrator | 2026-03-09 01:17:09.992192 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-09 01:17:09.992197 | orchestrator | Monday 09 March 2026 01:16:29 +0000 (0:00:05.051) 0:10:13.331 ********** 2026-03-09 01:17:09.992202 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:17:09.992206 | 
orchestrator | 2026-03-09 01:17:09.992211 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-09 01:17:09.992216 | orchestrator | Monday 09 March 2026 01:16:43 +0000 (0:00:14.581) 0:10:27.913 ********** 2026-03-09 01:17:09.992221 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:17:09.992226 | orchestrator | 2026-03-09 01:17:09.992231 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-09 01:17:09.992235 | orchestrator | Monday 09 March 2026 01:16:45 +0000 (0:00:01.432) 0:10:29.345 ********** 2026-03-09 01:17:09.992240 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.992245 | orchestrator | 2026-03-09 01:17:09.992250 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-09 01:17:09.992255 | orchestrator | Monday 09 March 2026 01:16:46 +0000 (0:00:01.347) 0:10:30.692 ********** 2026-03-09 01:17:09.992269 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-09 01:17:09.992274 | orchestrator | 2026-03-09 01:17:09.992279 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-09 01:17:09.992289 | orchestrator | Monday 09 March 2026 01:16:59 +0000 (0:00:12.931) 0:10:43.624 ********** 2026-03-09 01:17:09.992294 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:17:09.992299 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:17:09.992304 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:17:09.992308 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:17:09.992313 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:17:09.992318 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:17:09.992323 | orchestrator | 2026-03-09 01:17:09.992328 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-09 01:17:09.992333 | orchestrator | 2026-03-09 
01:17:09.992338 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-09 01:17:09.992343 | orchestrator | Monday 09 March 2026 01:17:01 +0000 (0:00:01.952) 0:10:45.577 ********** 2026-03-09 01:17:09.992348 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:09.992352 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:09.992357 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:09.992362 | orchestrator | 2026-03-09 01:17:09.992367 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-09 01:17:09.992372 | orchestrator | 2026-03-09 01:17:09.992377 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-09 01:17:09.992382 | orchestrator | Monday 09 March 2026 01:17:02 +0000 (0:00:01.212) 0:10:46.789 ********** 2026-03-09 01:17:09.992386 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992413 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992422 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992430 | orchestrator | 2026-03-09 01:17:09.992437 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-09 01:17:09.992445 | orchestrator | 2026-03-09 01:17:09.992452 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-09 01:17:09.992460 | orchestrator | Monday 09 March 2026 01:17:03 +0000 (0:00:00.603) 0:10:47.393 ********** 2026-03-09 01:17:09.992465 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-09 01:17:09.992475 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-09 01:17:09.992481 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992486 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-09 01:17:09.992491 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-09 01:17:09.992495 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992500 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:09.992505 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-09 01:17:09.992510 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-09 01:17:09.992515 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992520 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-09 01:17:09.992525 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-09 01:17:09.992532 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992540 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-09 01:17:09.992547 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-09 01:17:09.992555 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992563 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-09 01:17:09.992571 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-09 01:17:09.992579 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992593 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:09.992601 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-09 01:17:09.992608 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-09 01:17:09.992626 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992634 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-09 01:17:09.992643 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-09 
01:17:09.992650 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992658 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:09.992666 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-09 01:17:09.992674 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-09 01:17:09.992682 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992690 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-09 01:17:09.992698 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-09 01:17:09.992706 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992714 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992722 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992730 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-09 01:17:09.992737 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-09 01:17:09.992746 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-09 01:17:09.992753 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-09 01:17:09.992757 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-09 01:17:09.992762 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:09.992767 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992772 | orchestrator | 2026-03-09 01:17:09.992777 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-09 01:17:09.992781 | orchestrator | 2026-03-09 01:17:09.992787 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-09 01:17:09.992791 | orchestrator | Monday 09 March 2026 01:17:04 +0000 (0:00:01.437) 
0:10:48.830 ********** 2026-03-09 01:17:09.992813 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-09 01:17:09.992819 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-09 01:17:09.992823 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992828 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-09 01:17:09.992833 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-09 01:17:09.992838 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992843 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-09 01:17:09.992848 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-09 01:17:09.992852 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:09.992857 | orchestrator | 2026-03-09 01:17:09.992862 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-09 01:17:09.992867 | orchestrator | 2026-03-09 01:17:09.992872 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-09 01:17:09.992877 | orchestrator | Monday 09 March 2026 01:17:05 +0000 (0:00:00.864) 0:10:49.695 ********** 2026-03-09 01:17:09.992881 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992886 | orchestrator | 2026-03-09 01:17:09.992891 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-09 01:17:09.992896 | orchestrator | 2026-03-09 01:17:09.992901 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-09 01:17:09.992905 | orchestrator | Monday 09 March 2026 01:17:06 +0000 (0:00:00.792) 0:10:50.487 ********** 2026-03-09 01:17:09.992910 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:09.992920 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:09.992924 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 01:17:09.992929 | orchestrator | 2026-03-09 01:17:09.992934 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:17:09.992941 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:17:09.992956 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-09 01:17:09.992966 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-09 01:17:09.992974 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-09 01:17:09.992982 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-09 01:17:09.992990 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-09 01:17:09.992998 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-09 01:17:09.993003 | orchestrator | 2026-03-09 01:17:09.993009 | orchestrator | 2026-03-09 01:17:09.993013 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:17:09.993018 | orchestrator | Monday 09 March 2026 01:17:06 +0000 (0:00:00.515) 0:10:51.003 ********** 2026-03-09 01:17:09.993023 | orchestrator | =============================================================================== 2026-03-09 01:17:09.993028 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 38.24s 2026-03-09 01:17:09.993033 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.35s 2026-03-09 01:17:09.993037 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.33s 2026-03-09 01:17:09.993042 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 27.65s 2026-03-09 01:17:09.993047 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 26.34s 2026-03-09 01:17:09.993052 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.11s 2026-03-09 01:17:09.993056 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.76s 2026-03-09 01:17:09.993061 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.09s 2026-03-09 01:17:09.993066 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.41s 2026-03-09 01:17:09.993071 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.18s 2026-03-09 01:17:09.993075 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 15.97s 2026-03-09 01:17:09.993080 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.81s 2026-03-09 01:17:09.993088 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.58s 2026-03-09 01:17:09.993096 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.57s 2026-03-09 01:17:09.993104 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.04s 2026-03-09 01:17:09.993111 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.95s 2026-03-09 01:17:09.993119 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.93s 2026-03-09 01:17:09.993126 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 11.34s 2026-03-09 01:17:09.993132 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.92s 2026-03-09 01:17:09.993149 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 8.32s 2026-03-09 01:17:09.993164 | orchestrator | 2026-03-09 01:17:09 | INFO  | Task 19a353bd-2ce2-4f40-b728-300c8191f367 is in state STARTED 2026-03-09 01:17:09.993172 | orchestrator | 2026-03-09 01:17:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:20:03.565824 | orchestrator | 2026-03-09 01:20:03 | INFO  | Task 19a353bd-2ce2-4f40-b728-300c8191f367 is in state STARTED 2026-03-09 01:20:03.565900 | orchestrator | 2026-03-09 01:20:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:20:06.612726 | orchestrator | 2026-03-09 01:20:06 | INFO  | Task 19a353bd-2ce2-4f40-b728-300c8191f367 is in state SUCCESS 2026-03-09 01:20:06.613540 | orchestrator | 2026-03-09 01:20:06.613569 | orchestrator | 2026-03-09 01:20:06.613580 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:20:06.613590 | orchestrator | 2026-03-09 01:20:06.613599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:20:06.613640 | orchestrator | Monday 09 March 2026 01:14:59 +0000 (0:00:00.313) 0:00:00.313 ********** 2026-03-09 01:20:06.613654 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.613666 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:06.613676 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:06.613686 | orchestrator | 2026-03-09 01:20:06.613701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:20:06.613712 | orchestrator | Monday 09 March 2026 01:15:00 +0000 (0:00:00.356) 0:00:00.670 ********** 2026-03-09 01:20:06.613722 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-09 01:20:06.613733 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-09 01:20:06.613744 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-09 01:20:06.613750 | orchestrator | 2026-03-09 01:20:06.613757 | orchestrator | PLAY [Apply role octavia]
****************************************************** 2026-03-09 01:20:06.613763 | orchestrator | 2026-03-09 01:20:06.613770 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:06.613776 | orchestrator | Monday 09 March 2026 01:15:00 +0000 (0:00:00.497) 0:00:01.167 ********** 2026-03-09 01:20:06.613783 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:06.613790 | orchestrator | 2026-03-09 01:20:06.613796 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-09 01:20:06.613803 | orchestrator | Monday 09 March 2026 01:15:01 +0000 (0:00:00.678) 0:00:01.845 ********** 2026-03-09 01:20:06.613810 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-09 01:20:06.613816 | orchestrator | 2026-03-09 01:20:06.613822 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-09 01:20:06.613829 | orchestrator | Monday 09 March 2026 01:15:05 +0000 (0:00:03.553) 0:00:05.398 ********** 2026-03-09 01:20:06.613835 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-09 01:20:06.613841 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-09 01:20:06.613847 | orchestrator | 2026-03-09 01:20:06.613854 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-09 01:20:06.613860 | orchestrator | Monday 09 March 2026 01:15:12 +0000 (0:00:07.739) 0:00:13.138 ********** 2026-03-09 01:20:06.613866 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:20:06.613872 | orchestrator | 2026-03-09 01:20:06.613879 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-09 
01:20:06.613885 | orchestrator | Monday 09 March 2026 01:15:16 +0000 (0:00:03.235) 0:00:16.374 ********** 2026-03-09 01:20:06.613960 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:20:06.613968 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 01:20:06.613975 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 01:20:06.613981 | orchestrator | 2026-03-09 01:20:06.613988 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-09 01:20:06.613994 | orchestrator | Monday 09 March 2026 01:15:24 +0000 (0:00:08.672) 0:00:25.046 ********** 2026-03-09 01:20:06.614001 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:20:06.614007 | orchestrator | 2026-03-09 01:20:06.614013 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-09 01:20:06.614053 | orchestrator | Monday 09 March 2026 01:15:29 +0000 (0:00:04.317) 0:00:29.363 ********** 2026-03-09 01:20:06.614060 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:20:06.614066 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:20:06.614073 | orchestrator | 2026-03-09 01:20:06.614079 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-09 01:20:06.614085 | orchestrator | Monday 09 March 2026 01:15:37 +0000 (0:00:08.101) 0:00:37.465 ********** 2026-03-09 01:20:06.614101 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-09 01:20:06.614494 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-09 01:20:06.614512 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-09 01:20:06.614522 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-09 01:20:06.614531 | 
orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-09 01:20:06.614541 | orchestrator | 2026-03-09 01:20:06.614550 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:06.614560 | orchestrator | Monday 09 March 2026 01:15:54 +0000 (0:00:17.410) 0:00:54.875 ********** 2026-03-09 01:20:06.614570 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:06.614580 | orchestrator | 2026-03-09 01:20:06.614589 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-09 01:20:06.614921 | orchestrator | Monday 09 March 2026 01:15:55 +0000 (0:00:00.730) 0:00:55.605 ********** 2026-03-09 01:20:06.614960 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.614972 | orchestrator | 2026-03-09 01:20:06.614983 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-09 01:20:06.614993 | orchestrator | Monday 09 March 2026 01:16:01 +0000 (0:00:05.823) 0:01:01.428 ********** 2026-03-09 01:20:06.615003 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615013 | orchestrator | 2026-03-09 01:20:06.615024 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:20:06.615072 | orchestrator | Monday 09 March 2026 01:16:05 +0000 (0:00:04.718) 0:01:06.146 ********** 2026-03-09 01:20:06.615081 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.615088 | orchestrator | 2026-03-09 01:20:06.615094 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-09 01:20:06.615101 | orchestrator | Monday 09 March 2026 01:16:09 +0000 (0:00:03.884) 0:01:10.031 ********** 2026-03-09 01:20:06.615107 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:20:06.615114 | orchestrator | 
changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:20:06.615120 | orchestrator | 2026-03-09 01:20:06.615126 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-09 01:20:06.615133 | orchestrator | Monday 09 March 2026 01:16:21 +0000 (0:00:11.835) 0:01:21.866 ********** 2026-03-09 01:20:06.615139 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-09 01:20:06.615146 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-09 01:20:06.615154 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-09 01:20:06.615162 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-09 01:20:06.615168 | orchestrator | 2026-03-09 01:20:06.615175 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-09 01:20:06.615181 | orchestrator | Monday 09 March 2026 01:16:39 +0000 (0:00:17.707) 0:01:39.574 ********** 2026-03-09 01:20:06.615187 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615193 | orchestrator | 2026-03-09 01:20:06.615200 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-09 01:20:06.615206 | orchestrator | Monday 09 March 2026 01:16:44 +0000 (0:00:04.992) 0:01:44.566 ********** 2026-03-09 01:20:06.615231 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615238 | orchestrator | 2026-03-09 01:20:06.615244 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-09 01:20:06.615251 | orchestrator | Monday 09 March 
2026 01:16:50 +0000 (0:00:06.059) 0:01:50.626 ********** 2026-03-09 01:20:06.615268 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.615274 | orchestrator | 2026-03-09 01:20:06.615281 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-09 01:20:06.615287 | orchestrator | Monday 09 March 2026 01:16:50 +0000 (0:00:00.264) 0:01:50.891 ********** 2026-03-09 01:20:06.615294 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.615300 | orchestrator | 2026-03-09 01:20:06.615306 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:06.615312 | orchestrator | Monday 09 March 2026 01:16:54 +0000 (0:00:04.104) 0:01:54.996 ********** 2026-03-09 01:20:06.615319 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:06.615325 | orchestrator | 2026-03-09 01:20:06.615332 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-09 01:20:06.615338 | orchestrator | Monday 09 March 2026 01:16:55 +0000 (0:00:01.193) 0:01:56.189 ********** 2026-03-09 01:20:06.615345 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615351 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615357 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615364 | orchestrator | 2026-03-09 01:20:06.615370 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-09 01:20:06.615376 | orchestrator | Monday 09 March 2026 01:17:02 +0000 (0:00:06.295) 0:02:02.485 ********** 2026-03-09 01:20:06.615382 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615414 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615424 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615434 | orchestrator | 2026-03-09 01:20:06.615445 | orchestrator | 
TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-09 01:20:06.615455 | orchestrator | Monday 09 March 2026 01:17:07 +0000 (0:00:05.178) 0:02:07.663 ********** 2026-03-09 01:20:06.615464 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615472 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615481 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615490 | orchestrator | 2026-03-09 01:20:06.615499 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-09 01:20:06.615508 | orchestrator | Monday 09 March 2026 01:17:08 +0000 (0:00:00.857) 0:02:08.520 ********** 2026-03-09 01:20:06.615518 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.615528 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:06.615537 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:06.615548 | orchestrator | 2026-03-09 01:20:06.615558 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-09 01:20:06.615567 | orchestrator | Monday 09 March 2026 01:17:10 +0000 (0:00:02.186) 0:02:10.707 ********** 2026-03-09 01:20:06.615577 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615587 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615597 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615607 | orchestrator | 2026-03-09 01:20:06.615617 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-09 01:20:06.615628 | orchestrator | Monday 09 March 2026 01:17:11 +0000 (0:00:01.339) 0:02:12.047 ********** 2026-03-09 01:20:06.615645 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615656 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615666 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615675 | orchestrator | 2026-03-09 01:20:06.615682 | orchestrator | TASK [octavia : Restart 
octavia-interface.service if required] ***************** 2026-03-09 01:20:06.615688 | orchestrator | Monday 09 March 2026 01:17:13 +0000 (0:00:01.353) 0:02:13.400 ********** 2026-03-09 01:20:06.615695 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615701 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615707 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615713 | orchestrator | 2026-03-09 01:20:06.615754 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-09 01:20:06.615773 | orchestrator | Monday 09 March 2026 01:17:15 +0000 (0:00:02.579) 0:02:15.979 ********** 2026-03-09 01:20:06.615784 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:20:06.615794 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:20:06.615805 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:20:06.615815 | orchestrator | 2026-03-09 01:20:06.615824 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-09 01:20:06.615833 | orchestrator | Monday 09 March 2026 01:17:17 +0000 (0:00:02.148) 0:02:18.128 ********** 2026-03-09 01:20:06.615843 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.615853 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:06.615863 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:06.615873 | orchestrator | 2026-03-09 01:20:06.615883 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-09 01:20:06.615893 | orchestrator | Monday 09 March 2026 01:17:18 +0000 (0:00:00.726) 0:02:18.855 ********** 2026-03-09 01:20:06.615904 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:06.615914 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.615924 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:06.615934 | orchestrator | 2026-03-09 01:20:06.615943 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-09 01:20:06.615954 | orchestrator | Monday 09 March 2026 01:17:21 +0000 (0:00:03.401) 0:02:22.256 ********** 2026-03-09 01:20:06.615964 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:06.615974 | orchestrator | 2026-03-09 01:20:06.615984 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-09 01:20:06.615995 | orchestrator | Monday 09 March 2026 01:17:22 +0000 (0:00:00.769) 0:02:23.025 ********** 2026-03-09 01:20:06.616006 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.616016 | orchestrator | 2026-03-09 01:20:06.616027 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:20:06.616038 | orchestrator | Monday 09 March 2026 01:17:26 +0000 (0:00:03.549) 0:02:26.575 ********** 2026-03-09 01:20:06.616049 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.616059 | orchestrator | 2026-03-09 01:20:06.616070 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-09 01:20:06.616080 | orchestrator | Monday 09 March 2026 01:17:29 +0000 (0:00:03.585) 0:02:30.160 ********** 2026-03-09 01:20:06.616087 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:20:06.616093 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:20:06.616099 | orchestrator | 2026-03-09 01:20:06.616106 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-09 01:20:06.616112 | orchestrator | Monday 09 March 2026 01:17:37 +0000 (0:00:07.341) 0:02:37.501 ********** 2026-03-09 01:20:06.616118 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.616124 | orchestrator | 2026-03-09 01:20:06.616131 | orchestrator | TASK [octavia : Set octavia resources facts] 
*********************************** 2026-03-09 01:20:06.616137 | orchestrator | Monday 09 March 2026 01:17:40 +0000 (0:00:03.671) 0:02:41.173 ********** 2026-03-09 01:20:06.616143 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:20:06.616149 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:20:06.616155 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:20:06.616161 | orchestrator | 2026-03-09 01:20:06.616167 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-09 01:20:06.616174 | orchestrator | Monday 09 March 2026 01:17:41 +0000 (0:00:00.357) 0:02:41.530 ********** 2026-03-09 01:20:06.616182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616294 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616338 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616363 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616447 | orchestrator | 2026-03-09 01:20:06.616454 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-09 01:20:06.616461 | orchestrator | Monday 09 March 2026 01:17:43 +0000 (0:00:02.801) 0:02:44.332 ********** 2026-03-09 01:20:06.616467 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.616474 | orchestrator | 2026-03-09 01:20:06.616480 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-09 01:20:06.616487 | orchestrator | Monday 09 March 2026 01:17:44 +0000 (0:00:00.148) 0:02:44.480 ********** 2026-03-09 01:20:06.616493 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.616499 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:06.616506 | 
orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:06.616512 | orchestrator | 2026-03-09 01:20:06.616518 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-09 01:20:06.616525 | orchestrator | Monday 09 March 2026 01:17:44 +0000 (0:00:00.629) 0:02:45.110 ********** 2026-03-09 01:20:06.616532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.616539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.616551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.616581 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.616609 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.616617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.616623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.616649 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:06.616676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.616684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.616691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.616715 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:06.616721 | orchestrator | 2026-03-09 01:20:06.616728 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:20:06.616734 | orchestrator | Monday 09 March 2026 01:17:45 +0000 (0:00:00.854) 0:02:45.964 ********** 2026-03-09 01:20:06.616740 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:20:06.616747 | orchestrator | 2026-03-09 01:20:06.616753 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-09 01:20:06.616759 | orchestrator | Monday 09 March 2026 01:17:46 +0000 (0:00:00.595) 0:02:46.560 ********** 2026-03-09 01:20:06.616769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.616817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616831 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.616841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:20:06.616918 | orchestrator | 2026-03-09 01:20:06.616925 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-09 01:20:06.616936 | orchestrator | Monday 09 March 2026 01:17:51 +0000 (0:00:05.751) 0:02:52.312 ********** 2026-03-09 01:20:06.616942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.616949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.616956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.616975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.616982 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.616989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.617000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.617007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.617027 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:06.617040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.617053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.617065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.617097 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:06.617107 | orchestrator | 2026-03-09 01:20:06.617117 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-09 01:20:06.617128 | orchestrator | Monday 09 March 2026 01:17:52 +0000 (0:00:00.793) 0:02:53.105 ********** 2026-03-09 01:20:06.617138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.617150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.617164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-09 01:20:06.617171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.617184 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:20:06.617191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.617200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.617211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.617236 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:20:06.617242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:20:06.617249 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:20:06.617258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:20:06.617281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:20:06.617287 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:20:06.617294 | orchestrator | 2026-03-09 01:20:06.617300 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-09 01:20:06.617307 | orchestrator | Monday 09 March 2026 01:17:53 +0000 (0:00:01.083) 0:02:54.189 ********** 2026-03-09 01:20:06.617313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.617320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.617330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:20:06.617344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.617351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.617358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:20:06.617364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617457 | orchestrator |
2026-03-09 01:20:06.617463 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-03-09 01:20:06.617470 | orchestrator | Monday 09 March 2026 01:17:58 +0000 (0:00:05.138) 0:02:59.327 **********
2026-03-09 01:20:06.617476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-09 01:20:06.617486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-09 01:20:06.617493 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-09 01:20:06.617499 | orchestrator |
2026-03-09 01:20:06.617505 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-03-09 01:20:06.617512 | orchestrator | Monday 09 March 2026 01:18:01 +0000 (0:00:02.026) 0:03:01.354 **********
2026-03-09 01:20:06.617526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.617534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.617541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.617548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.617554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.617568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.617578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.617625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.617650 | orchestrator |
2026-03-09 01:20:06.617657 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-09 01:20:06.617664 | orchestrator | Monday 09 March 2026 01:18:18 +0000 (0:00:17.033) 0:03:18.388 **********
2026-03-09 01:20:06.617670 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.617676 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.617683 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.617689 | orchestrator |
2026-03-09 01:20:06.617695 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-09 01:20:06.617702 | orchestrator | Monday 09 March 2026 01:18:19 +0000 (0:00:01.570) 0:03:19.958 **********
2026-03-09 01:20:06.617708 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617714 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617721 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617727 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617733 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617740 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617746 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617758 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617765 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617771 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617777 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617784 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617790 | orchestrator |
2026-03-09 01:20:06.617796 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-09 01:20:06.617802 | orchestrator | Monday 09 March 2026 01:18:25 +0000 (0:00:05.634) 0:03:25.593 **********
2026-03-09 01:20:06.617809 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617815 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617821 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617828 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617834 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617840 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617846 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617852 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617859 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617865 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617871 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617878 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617884 | orchestrator |
2026-03-09 01:20:06.617890 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-09 01:20:06.617896 | orchestrator | Monday 09 March 2026 01:18:31 +0000 (0:00:05.943) 0:03:31.537 **********
2026-03-09 01:20:06.617903 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617909 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617918 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-09 01:20:06.617925 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617931 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617937 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-09 01:20:06.617944 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617950 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617959 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-09 01:20:06.617966 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617972 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617978 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-09 01:20:06.617984 | orchestrator |
2026-03-09 01:20:06.617991 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-09 01:20:06.617997 | orchestrator | Monday 09 March 2026 01:18:36 +0000 (0:00:05.305) 0:03:36.842 **********
2026-03-09 01:20:06.618003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.618014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.618047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-09 01:20:06.618057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.618068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.618075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-09 01:20:06.618082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-09 01:20:06.618133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.618144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.618150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:20:06.618157 | orchestrator |
2026-03-09 01:20:06.618163 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-09 01:20:06.618170 | orchestrator | Monday 09 March 2026 01:18:40 +0000 (0:00:03.771) 0:03:40.614 **********
2026-03-09 01:20:06.618176 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:20:06.618182 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:20:06.618189 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:20:06.618195 | orchestrator |
2026-03-09 01:20:06.618201 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-09 01:20:06.618208 | orchestrator | Monday 09 March 2026 01:18:40 +0000 (0:00:00.381) 0:03:40.995 **********
2026-03-09 01:20:06.618214 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618220 | orchestrator |
2026-03-09 01:20:06.618226 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-09 01:20:06.618232 | orchestrator | Monday 09 March 2026 01:18:42 +0000 (0:00:02.255) 0:03:43.251 **********
2026-03-09 01:20:06.618239 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618245 | orchestrator |
2026-03-09 01:20:06.618251 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-09 01:20:06.618258 | orchestrator | Monday 09 March 2026 01:18:45 +0000 (0:00:02.213) 0:03:45.464 **********
2026-03-09 01:20:06.618264 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618270 | orchestrator |
2026-03-09 01:20:06.618276 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-09 01:20:06.618283 | orchestrator | Monday 09 March 2026 01:18:47 +0000 (0:00:02.523) 0:03:47.988 **********
2026-03-09 01:20:06.618289 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618295 | orchestrator |
2026-03-09 01:20:06.618301 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-09 01:20:06.618308 | orchestrator | Monday 09 March 2026 01:18:50 +0000 (0:00:02.820) 0:03:50.808 **********
2026-03-09 01:20:06.618314 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618320 | orchestrator |
2026-03-09 01:20:06.618326 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-09 01:20:06.618333 | orchestrator | Monday 09 March 2026 01:19:13 +0000 (0:00:23.097) 0:04:13.906 **********
2026-03-09 01:20:06.618339 | orchestrator |
2026-03-09 01:20:06.618348 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-09 01:20:06.618359 | orchestrator | Monday 09 March 2026 01:19:13 +0000 (0:00:00.072) 0:04:13.979 **********
2026-03-09 01:20:06.618365 | orchestrator |
2026-03-09 01:20:06.618371 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-09 01:20:06.618378 | orchestrator | Monday 09 March 2026 01:19:13 +0000 (0:00:00.066) 0:04:14.045 **********
2026-03-09 01:20:06.618384 | orchestrator |
2026-03-09 01:20:06.618409 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-09 01:20:06.618421 | orchestrator | Monday 09 March 2026 01:19:13 +0000 (0:00:00.071) 0:04:14.117 **********
2026-03-09 01:20:06.618428 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618434 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.618440 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.618446 | orchestrator |
2026-03-09 01:20:06.618453 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-09 01:20:06.618459 | orchestrator | Monday 09 March 2026 01:19:30 +0000 (0:00:16.319) 0:04:30.436 **********
2026-03-09 01:20:06.618465 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618471 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.618477 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.618484 | orchestrator |
2026-03-09 01:20:06.618490 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-09 01:20:06.618496 | orchestrator | Monday 09 March 2026 01:19:42 +0000 (0:00:12.103) 0:04:42.540 **********
2026-03-09 01:20:06.618502 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618509 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.618515 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.618521 | orchestrator |
2026-03-09 01:20:06.618527 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-09 01:20:06.618533 | orchestrator | Monday 09 March 2026 01:19:48 +0000 (0:00:06.433) 0:04:48.973 **********
2026-03-09 01:20:06.618540 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618546 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.618552 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.618558 | orchestrator |
2026-03-09 01:20:06.618564 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-09 01:20:06.618571 | orchestrator | Monday 09 March 2026 01:19:59 +0000 (0:00:10.931) 0:04:59.905 **********
2026-03-09 01:20:06.618577 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:20:06.618583 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:20:06.618589 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:20:06.618595 | orchestrator |
2026-03-09 01:20:06.618601 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:20:06.618608 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:20:06.618615 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 01:20:06.618621 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 01:20:06.618627 | orchestrator |
2026-03-09 01:20:06.618634 | orchestrator |
2026-03-09 01:20:06.618640 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:20:06.618646 | orchestrator | Monday 09 March 2026 01:20:05 +0000 (0:00:06.100) 0:05:06.006 **********
2026-03-09 01:20:06.618652 | orchestrator | ===============================================================================
2026-03-09 01:20:06.618659 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.10s
2026-03-09 01:20:06.618665 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.71s
2026-03-09 01:20:06.618671 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.41s
2026-03-09 01:20:06.618681 | orchestrator | octavia : Copying over octavia.conf ------------------------------------
17.03s 2026-03-09 01:20:06.618688 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.32s 2026-03-09 01:20:06.618694 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.10s 2026-03-09 01:20:06.618700 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.84s 2026-03-09 01:20:06.618706 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.93s 2026-03-09 01:20:06.618713 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.67s 2026-03-09 01:20:06.618719 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.10s 2026-03-09 01:20:06.618725 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.74s 2026-03-09 01:20:06.618731 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.34s 2026-03-09 01:20:06.618737 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.43s 2026-03-09 01:20:06.618744 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.30s 2026-03-09 01:20:06.618750 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.10s 2026-03-09 01:20:06.618756 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.06s 2026-03-09 01:20:06.618762 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.94s 2026-03-09 01:20:06.618768 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.82s 2026-03-09 01:20:06.618775 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.75s 2026-03-09 01:20:06.618781 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.63s 
2026-03-09 01:20:06.618790 | orchestrator | 2026-03-09 01:20:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:09.666950 | orchestrator | 2026-03-09 01:20:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:12.714890 | orchestrator | 2026-03-09 01:20:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:15.757288 | orchestrator | 2026-03-09 01:20:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:18.802762 | orchestrator | 2026-03-09 01:20:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:21.848230 | orchestrator | 2026-03-09 01:20:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:24.887319 | orchestrator | 2026-03-09 01:20:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:27.929381 | orchestrator | 2026-03-09 01:20:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:30.967877 | orchestrator | 2026-03-09 01:20:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:34.031643 | orchestrator | 2026-03-09 01:20:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:37.071730 | orchestrator | 2026-03-09 01:20:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:40.105738 | orchestrator | 2026-03-09 01:20:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:43.145894 | orchestrator | 2026-03-09 01:20:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:46.192919 | orchestrator | 2026-03-09 01:20:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:49.233915 | orchestrator | 2026-03-09 01:20:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:52.281748 | orchestrator | 2026-03-09 01:20:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:55.327629 | orchestrator | 2026-03-09 01:20:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:20:58.365472 | orchestrator | 2026-03-09 01:20:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:21:01.411394 | orchestrator | 2026-03-09 01:21:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:21:04.454284 | orchestrator | 2026-03-09 01:21:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:21:07.494775 | orchestrator |
2026-03-09 01:21:07.852317 | orchestrator |
2026-03-09 01:21:07.856993 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 9 01:21:07 UTC 2026
2026-03-09 01:21:07.857044 | orchestrator |
2026-03-09 01:21:08.366342 | orchestrator | ok: Runtime: 0:38:09.422820
2026-03-09 01:21:08.644985 |
2026-03-09 01:21:08.645130 | TASK [Bootstrap services]
2026-03-09 01:21:09.409884 | orchestrator |
2026-03-09 01:21:09.410139 | orchestrator | # BOOTSTRAP
2026-03-09 01:21:09.410166 | orchestrator |
2026-03-09 01:21:09.410183 | orchestrator | + set -e
2026-03-09 01:21:09.410201 | orchestrator | + echo
2026-03-09 01:21:09.410219 | orchestrator | + echo '# BOOTSTRAP'
2026-03-09 01:21:09.410241 | orchestrator | + echo
2026-03-09 01:21:09.410290 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-09 01:21:09.419663 | orchestrator | + set -e
2026-03-09 01:21:09.419759 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-09 01:21:15.231564 | orchestrator | 2026-03-09 01:21:15 | INFO  | It takes a moment until task beed4045-e669-4de6-8ff7-1696896fc26f (flavor-manager) has been started and output is visible here.
2026-03-09 01:21:23.989164 | orchestrator | 2026-03-09 01:21:18 | INFO  | Flavor SCS-1L-1 created
2026-03-09 01:21:23.989313 | orchestrator | 2026-03-09 01:21:19 | INFO  | Flavor SCS-1L-1-5 created
2026-03-09 01:21:23.989340 | orchestrator | 2026-03-09 01:21:19 | INFO  | Flavor SCS-1V-2 created
2026-03-09 01:21:23.989356 | orchestrator | 2026-03-09 01:21:19 | INFO  | Flavor SCS-1V-2-5 created
2026-03-09 01:21:23.989373 | orchestrator | 2026-03-09 01:21:19 | INFO  | Flavor SCS-1V-4 created
2026-03-09 01:21:23.989389 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-1V-4-10 created
2026-03-09 01:21:23.989404 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-1V-8 created
2026-03-09 01:21:23.989467 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-1V-8-20 created
2026-03-09 01:21:23.989490 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-2V-4 created
2026-03-09 01:21:23.989500 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-2V-4-10 created
2026-03-09 01:21:23.989509 | orchestrator | 2026-03-09 01:21:20 | INFO  | Flavor SCS-2V-8 created
2026-03-09 01:21:23.989518 | orchestrator | 2026-03-09 01:21:21 | INFO  | Flavor SCS-2V-8-20 created
2026-03-09 01:21:23.989526 | orchestrator | 2026-03-09 01:21:21 | INFO  | Flavor SCS-2V-16 created
2026-03-09 01:21:23.989535 | orchestrator | 2026-03-09 01:21:21 | INFO  | Flavor SCS-2V-16-50 created
2026-03-09 01:21:23.989544 | orchestrator | 2026-03-09 01:21:21 | INFO  | Flavor SCS-4V-8 created
2026-03-09 01:21:23.989553 | orchestrator | 2026-03-09 01:21:21 | INFO  | Flavor SCS-4V-8-20 created
2026-03-09 01:21:23.989561 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-4V-16 created
2026-03-09 01:21:23.989570 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-4V-16-50 created
2026-03-09 01:21:23.989579 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-4V-32 created
2026-03-09 01:21:23.989587 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-4V-32-100 created
2026-03-09 01:21:23.989596 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-8V-16 created
2026-03-09 01:21:23.989605 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-8V-16-50 created
2026-03-09 01:21:23.989614 | orchestrator | 2026-03-09 01:21:22 | INFO  | Flavor SCS-8V-32 created
2026-03-09 01:21:23.989623 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-8V-32-100 created
2026-03-09 01:21:23.989631 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-16V-32 created
2026-03-09 01:21:23.989640 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-16V-32-100 created
2026-03-09 01:21:23.989649 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-2V-4-20s created
2026-03-09 01:21:23.989657 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-4V-8-50s created
2026-03-09 01:21:23.989666 | orchestrator | 2026-03-09 01:21:23 | INFO  | Flavor SCS-8V-32-100s created
2026-03-09 01:21:26.673236 | orchestrator | 2026-03-09 01:21:26 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-09 01:21:26.754974 | orchestrator | 2026-03-09 01:21:26 | INFO  | Task 45092de8-5079-46c0-a2e5-55659acb6cec (bootstrap-basic) was prepared for execution.
2026-03-09 01:21:26.755044 | orchestrator | 2026-03-09 01:21:26 | INFO  | It takes a moment until task 45092de8-5079-46c0-a2e5-55659acb6cec (bootstrap-basic) has been started and output is visible here.
2026-03-09 01:22:16.784009 | orchestrator |
2026-03-09 01:22:16.784098 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-09 01:22:16.784107 | orchestrator |
2026-03-09 01:22:16.784113 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 01:22:16.784119 | orchestrator | Monday 09 March 2026 01:21:31 +0000 (0:00:00.081) 0:00:00.081 **********
2026-03-09 01:22:16.784125 | orchestrator | ok: [localhost]
2026-03-09 01:22:16.784131 | orchestrator |
2026-03-09 01:22:16.784137 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-09 01:22:16.784142 | orchestrator | Monday 09 March 2026 01:21:33 +0000 (0:00:01.968) 0:00:02.050 **********
2026-03-09 01:22:16.784147 | orchestrator | ok: [localhost]
2026-03-09 01:22:16.784153 | orchestrator |
2026-03-09 01:22:16.784158 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-09 01:22:16.784163 | orchestrator | Monday 09 March 2026 01:21:44 +0000 (0:00:10.552) 0:00:12.602 **********
2026-03-09 01:22:16.784169 | orchestrator | changed: [localhost]
2026-03-09 01:22:16.784174 | orchestrator |
2026-03-09 01:22:16.784180 | orchestrator | TASK [Create public network] ***************************************************
2026-03-09 01:22:16.784185 | orchestrator | Monday 09 March 2026 01:21:52 +0000 (0:00:08.027) 0:00:20.629 **********
2026-03-09 01:22:16.784191 | orchestrator | changed: [localhost]
2026-03-09 01:22:16.784196 | orchestrator |
2026-03-09 01:22:16.784201 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-09 01:22:16.784207 | orchestrator | Monday 09 March 2026 01:21:57 +0000 (0:00:05.286) 0:00:25.916 **********
2026-03-09 01:22:16.784215 | orchestrator | changed: [localhost]
2026-03-09 01:22:16.784220 | orchestrator |
2026-03-09 01:22:16.784226 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-09 01:22:16.784231 | orchestrator | Monday 09 March 2026 01:22:04 +0000 (0:00:06.457) 0:00:32.374 **********
2026-03-09 01:22:16.784236 | orchestrator | changed: [localhost]
2026-03-09 01:22:16.784241 | orchestrator |
2026-03-09 01:22:16.784247 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-09 01:22:16.784252 | orchestrator | Monday 09 March 2026 01:22:08 +0000 (0:00:04.731) 0:00:37.106 **********
2026-03-09 01:22:16.784257 | orchestrator | changed: [localhost]
2026-03-09 01:22:16.784262 | orchestrator |
2026-03-09 01:22:16.784268 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-09 01:22:16.784280 | orchestrator | Monday 09 March 2026 01:22:12 +0000 (0:00:03.930) 0:00:41.037 **********
2026-03-09 01:22:16.784285 | orchestrator | ok: [localhost]
2026-03-09 01:22:16.784290 | orchestrator |
2026-03-09 01:22:16.784295 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:22:16.784301 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:22:16.784307 | orchestrator |
2026-03-09 01:22:16.784313 | orchestrator |
2026-03-09 01:22:16.784318 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:22:16.784323 | orchestrator | Monday 09 March 2026 01:22:16 +0000 (0:00:03.718) 0:00:44.755 **********
2026-03-09 01:22:16.784328 | orchestrator | ===============================================================================
2026-03-09 01:22:16.784333 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.55s
2026-03-09 01:22:16.784339 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.03s
2026-03-09 01:22:16.784344 | orchestrator | Set public network to default ------------------------------------------- 6.46s
2026-03-09 01:22:16.784349 | orchestrator | Create public network --------------------------------------------------- 5.29s
2026-03-09 01:22:16.784373 | orchestrator | Create public subnet ---------------------------------------------------- 4.73s
2026-03-09 01:22:16.784378 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.93s
2026-03-09 01:22:16.784383 | orchestrator | Create manager role ----------------------------------------------------- 3.72s
2026-03-09 01:22:16.784388 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s
2026-03-09 01:22:19.316276 | orchestrator | 2026-03-09 01:22:19 | INFO  | It takes a moment until task 0ff74792-7ec0-4c82-a41b-7b8ad806f96c (image-manager) has been started and output is visible here.
2026-03-09 01:23:01.201987 | orchestrator | 2026-03-09 01:22:22 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-09 01:23:01.202113 | orchestrator | 2026-03-09 01:22:22 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-09 01:23:01.202124 | orchestrator | 2026-03-09 01:22:22 | INFO  | Importing image Cirros 0.6.2
2026-03-09 01:23:01.202130 | orchestrator | 2026-03-09 01:22:22 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:23:01.202134 | orchestrator | 2026-03-09 01:22:24 | INFO  | Waiting for import to complete...
2026-03-09 01:23:01.202138 | orchestrator | 2026-03-09 01:22:35 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-09 01:23:01.202143 | orchestrator | 2026-03-09 01:22:35 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-09 01:23:01.202148 | orchestrator | 2026-03-09 01:22:35 | INFO  | Setting internal_version = 0.6.2
2026-03-09 01:23:01.202152 | orchestrator | 2026-03-09 01:22:35 | INFO  | Setting image_original_user = cirros
2026-03-09 01:23:01.202157 | orchestrator | 2026-03-09 01:22:35 | INFO  | Adding tag os:cirros
2026-03-09 01:23:01.202161 | orchestrator | 2026-03-09 01:22:36 | INFO  | Setting property architecture: x86_64
2026-03-09 01:23:01.202165 | orchestrator | 2026-03-09 01:22:36 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:23:01.202169 | orchestrator | 2026-03-09 01:22:36 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:23:01.202173 | orchestrator | 2026-03-09 01:22:37 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:23:01.202178 | orchestrator | 2026-03-09 01:22:37 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:23:01.202182 | orchestrator | 2026-03-09 01:22:37 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:23:01.202186 | orchestrator | 2026-03-09 01:22:37 | INFO  | Setting property os_distro: cirros
2026-03-09 01:23:01.202190 | orchestrator | 2026-03-09 01:22:37 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:23:01.202194 | orchestrator | 2026-03-09 01:22:38 | INFO  | Setting property replace_frequency: never
2026-03-09 01:23:01.202198 | orchestrator | 2026-03-09 01:22:38 | INFO  | Setting property uuid_validity: none
2026-03-09 01:23:01.202202 | orchestrator | 2026-03-09 01:22:38 | INFO  | Setting property provided_until: none
2026-03-09 01:23:01.202205 | orchestrator | 2026-03-09 01:22:38 | INFO  | Setting property image_description: Cirros
2026-03-09 01:23:01.202209 | orchestrator | 2026-03-09 01:22:39 | INFO  | Setting property image_name: Cirros
2026-03-09 01:23:01.202213 | orchestrator | 2026-03-09 01:22:39 | INFO  | Setting property internal_version: 0.6.2
2026-03-09 01:23:01.202217 | orchestrator | 2026-03-09 01:22:39 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:23:01.202220 | orchestrator | 2026-03-09 01:22:39 | INFO  | Setting property os_version: 0.6.2
2026-03-09 01:23:01.202239 | orchestrator | 2026-03-09 01:22:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:23:01.202254 | orchestrator | 2026-03-09 01:22:40 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-09 01:23:01.202258 | orchestrator | 2026-03-09 01:22:40 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-09 01:23:01.202261 | orchestrator | 2026-03-09 01:22:40 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-09 01:23:01.202265 | orchestrator | 2026-03-09 01:22:40 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-09 01:23:01.202269 | orchestrator | 2026-03-09 01:22:41 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-09 01:23:01.202273 | orchestrator | 2026-03-09 01:22:41 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-09 01:23:01.202279 | orchestrator | 2026-03-09 01:22:41 | INFO  | Importing image Cirros 0.6.3
2026-03-09 01:23:01.202283 | orchestrator | 2026-03-09 01:22:41 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:23:01.202287 | orchestrator | 2026-03-09 01:22:42 | INFO  | Waiting for image to leave queued state...
2026-03-09 01:23:01.202290 | orchestrator | 2026-03-09 01:22:45 | INFO  | Waiting for import to complete...
2026-03-09 01:23:01.202294 | orchestrator | 2026-03-09 01:22:55 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-09 01:23:01.202310 | orchestrator | 2026-03-09 01:22:55 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-09 01:23:01.202314 | orchestrator | 2026-03-09 01:22:55 | INFO  | Setting internal_version = 0.6.3
2026-03-09 01:23:01.202318 | orchestrator | 2026-03-09 01:22:55 | INFO  | Setting image_original_user = cirros
2026-03-09 01:23:01.202322 | orchestrator | 2026-03-09 01:22:55 | INFO  | Adding tag os:cirros
2026-03-09 01:23:01.202325 | orchestrator | 2026-03-09 01:22:55 | INFO  | Setting property architecture: x86_64
2026-03-09 01:23:01.202329 | orchestrator | 2026-03-09 01:22:55 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:23:01.202333 | orchestrator | 2026-03-09 01:22:56 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:23:01.202337 | orchestrator | 2026-03-09 01:22:56 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:23:01.202340 | orchestrator | 2026-03-09 01:22:56 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:23:01.202344 | orchestrator | 2026-03-09 01:22:56 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:23:01.202348 | orchestrator | 2026-03-09 01:22:57 | INFO  | Setting property os_distro: cirros
2026-03-09 01:23:01.202352 | orchestrator | 2026-03-09 01:22:57 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:23:01.202356 | orchestrator | 2026-03-09 01:22:57 | INFO  | Setting property replace_frequency: never
2026-03-09 01:23:01.202360 | orchestrator | 2026-03-09 01:22:57 | INFO  | Setting property uuid_validity: none
2026-03-09 01:23:01.202363 | orchestrator | 2026-03-09 01:22:58 | INFO  | Setting property provided_until: none
2026-03-09 01:23:01.202367 | orchestrator | 2026-03-09 01:22:58 | INFO  | Setting property image_description: Cirros
2026-03-09 01:23:01.202371 | orchestrator | 2026-03-09 01:22:58 | INFO  | Setting property image_name: Cirros
2026-03-09 01:23:01.202375 | orchestrator | 2026-03-09 01:22:58 | INFO  | Setting property internal_version: 0.6.3
2026-03-09 01:23:01.202378 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:23:01.202388 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property os_version: 0.6.3
2026-03-09 01:23:01.202433 | orchestrator | 2026-03-09 01:22:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:23:01.202440 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-09 01:23:01.202447 | orchestrator | 2026-03-09 01:23:00 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-09 01:23:01.202452 | orchestrator | 2026-03-09 01:23:00 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-09 01:23:01.202456 | orchestrator | 2026-03-09 01:23:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-09 01:23:01.525053 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-09 01:23:03.802802 | orchestrator | 2026-03-09 01:23:03 | INFO  | date: 2026-03-08
2026-03-09 01:23:03.802900 | orchestrator | 2026-03-09 01:23:03 | INFO  | image: octavia-amphora-haproxy-2024.2.20260308.qcow2
2026-03-09 01:23:03.802938 | orchestrator | 2026-03-09 01:23:03 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260308.qcow2
2026-03-09 01:23:03.802950 | orchestrator | 2026-03-09 01:23:03 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260308.qcow2.CHECKSUM
2026-03-09 01:23:03.948164 | orchestrator | 2026-03-09 01:23:03 | INFO  | checksum:
localhost | ok: "/var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/logs"
2026-03-09 01:23:41.656588 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/artifacts"
2026-03-09 01:23:41.937715 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9a456cfc94b04f73a04fd6c3a5a67d43/work/docs"
2026-03-09 01:23:41.963528 |
2026-03-09 01:23:41.963703 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-09 01:23:42.896005 | orchestrator | changed: .d..t...... ./
2026-03-09 01:23:42.896360 | orchestrator | changed: All items complete
2026-03-09 01:23:42.896418 |
2026-03-09 01:23:43.616178 | orchestrator | changed: .d..t...... ./
2026-03-09 01:23:44.374815 | orchestrator | changed: .d..t...... ./
2026-03-09 01:23:44.402232 |
2026-03-09 01:23:44.402384 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-09 01:23:44.444583 | orchestrator | skipping: Conditional result was False
2026-03-09 01:23:44.446914 | orchestrator | skipping: Conditional result was False
2026-03-09 01:23:44.464902 |
2026-03-09 01:23:44.465056 | PLAY RECAP
2026-03-09 01:23:44.465141 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-09 01:23:44.465186 |
2026-03-09 01:23:44.597094 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-09 01:23:44.598170 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:23:45.366609 |
2026-03-09 01:23:45.366766 | PLAY [Base post]
2026-03-09 01:23:45.382095 |
2026-03-09 01:23:45.382237 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-09 01:23:46.452334 | orchestrator | changed
2026-03-09 01:23:46.464333 |
2026-03-09 01:23:46.464463 | PLAY RECAP
2026-03-09 01:23:46.464538 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-09 01:23:46.464618 |
2026-03-09 01:23:46.591439 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:23:46.593939 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-09 01:23:47.394890 |
2026-03-09 01:23:47.395081 | PLAY [Base post-logs]
2026-03-09 01:23:47.406077 |
2026-03-09 01:23:47.406219 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-09 01:23:47.878797 | localhost | changed
2026-03-09 01:23:47.888713 |
2026-03-09 01:23:47.888855 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-09 01:23:47.927574 | localhost | ok
2026-03-09 01:23:47.932277 |
2026-03-09 01:23:47.932394 | TASK [Set zuul-log-path fact]
2026-03-09 01:23:47.960479 | localhost | ok
2026-03-09 01:23:47.976639 |
2026-03-09 01:23:47.976782 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 01:23:48.015030 | localhost | ok
2026-03-09 01:23:48.022399 |
2026-03-09 01:23:48.022574 | TASK [upload-logs : Create log directories]
2026-03-09 01:23:48.542960 | localhost | changed
2026-03-09 01:23:48.547682 |
2026-03-09 01:23:48.547853 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-09 01:23:49.071422 | localhost -> localhost | ok: Runtime: 0:00:00.008062
2026-03-09 01:23:49.081257 |
2026-03-09 01:23:49.081460 | TASK [upload-logs : Upload logs to log server]
2026-03-09 01:23:49.647315 | localhost | Output suppressed because no_log was given
2026-03-09 01:23:49.649436 |
2026-03-09 01:23:49.649554 | LOOP [upload-logs : Compress console log and json output]
2026-03-09 01:23:49.702744 | localhost | skipping: Conditional result was False
2026-03-09 01:23:49.707901 | localhost | skipping: Conditional result was False
2026-03-09 01:23:49.718321 |
2026-03-09 01:23:49.718492 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-09 01:23:49.767568 | localhost | skipping: Conditional result was False
2026-03-09 01:23:49.768278 |
2026-03-09 01:23:49.771442 | localhost | skipping: Conditional result was False
2026-03-09 01:23:49.779820 |
2026-03-09 01:23:49.780280 | LOOP [upload-logs : Upload console log and json output]